00:00:00.001 Started by upstream project "autotest-nightly" build number 4308
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3671
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.098 The recommended git tool is: git
00:00:00.098 using credential 00000000-0000-0000-0000-000000000002
00:00:00.112 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.131 Fetching changes from the remote Git repository
00:00:00.139 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.160 Using shallow fetch with depth 1
00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.160 > git --version # timeout=10
00:00:00.183 > git --version # 'git version 2.39.2'
00:00:00.183 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.228 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.762 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.771 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.782 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.782 > git config core.sparsecheckout # timeout=10
00:00:04.793 > git read-tree -mu HEAD # timeout=10
00:00:04.808 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.830 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.830 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.910 [Pipeline] Start of Pipeline
00:00:04.924 [Pipeline] library
00:00:04.925 Loading library shm_lib@master
00:00:04.925 Library shm_lib@master is cached. Copying from home.
00:00:04.937 [Pipeline] node
00:00:04.958 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.960 [Pipeline] {
00:00:04.969 [Pipeline] catchError
00:00:04.970 [Pipeline] {
00:00:04.979 [Pipeline] wrap
00:00:04.987 [Pipeline] {
00:00:04.993 [Pipeline] stage
00:00:04.994 [Pipeline] { (Prologue)
00:00:05.008 [Pipeline] echo
00:00:05.009 Node: VM-host-WFP7
00:00:05.014 [Pipeline] cleanWs
00:00:05.027 [WS-CLEANUP] Deleting project workspace...
00:00:05.027 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.034 [WS-CLEANUP] done
00:00:05.209 [Pipeline] setCustomBuildProperty
00:00:05.312 [Pipeline] httpRequest
00:00:05.952 [Pipeline] echo
00:00:05.956 Sorcerer 10.211.164.20 is alive
00:00:05.965 [Pipeline] retry
00:00:05.968 [Pipeline] {
00:00:05.978 [Pipeline] httpRequest
00:00:05.982 HttpMethod: GET
00:00:05.983 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.983 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.983 Response Code: HTTP/1.1 200 OK
00:00:05.984 Success: Status code 200 is in the accepted range: 200,404
00:00:05.984 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.270 [Pipeline] }
00:00:06.292 [Pipeline] // retry
00:00:06.299 [Pipeline] sh
00:00:06.587 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.603 [Pipeline] httpRequest
00:00:06.925 [Pipeline] echo
00:00:06.926 Sorcerer 10.211.164.20 is alive
00:00:06.961 [Pipeline] retry
00:00:06.964 [Pipeline] {
00:00:06.977 [Pipeline] httpRequest
00:00:06.982 HttpMethod: GET
00:00:06.982 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:06.983 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:06.995 Response Code: HTTP/1.1 200 OK
00:00:06.996 Success: Status code 200 is in the accepted range: 200,404
00:00:06.996 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:02:07.911 [Pipeline] }
00:02:07.927 [Pipeline] // retry
00:02:07.934 [Pipeline] sh
00:02:08.216 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:02:10.769 [Pipeline] sh
00:02:11.053 + git -C spdk log --oneline -n5
00:02:11.053 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:02:11.053 5592070b3 doc: update nvmf_tracing.md
00:02:11.053 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:02:11.053 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:02:11.053 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT
00:02:11.068 [Pipeline] writeFile
00:02:11.077 [Pipeline] sh
00:02:11.356 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:11.368 [Pipeline] sh
00:02:11.652 + cat autorun-spdk.conf
00:02:11.652 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.652 SPDK_RUN_ASAN=1
00:02:11.652 SPDK_RUN_UBSAN=1
00:02:11.652 SPDK_TEST_RAID=1
00:02:11.652 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:11.659 RUN_NIGHTLY=1
00:02:11.661 [Pipeline] }
00:02:11.675 [Pipeline] // stage
00:02:11.691 [Pipeline] stage
00:02:11.693 [Pipeline] { (Run VM)
00:02:11.705 [Pipeline] sh
00:02:11.990 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:11.990 + echo 'Start stage prepare_nvme.sh'
00:02:11.990 Start stage prepare_nvme.sh
00:02:11.990 + [[ -n 3 ]]
00:02:11.990 + disk_prefix=ex3
00:02:11.990 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:02:11.990 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:02:11.990 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:02:11.990 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.990 ++ SPDK_RUN_ASAN=1
00:02:11.990 ++ SPDK_RUN_UBSAN=1
00:02:11.990 ++ SPDK_TEST_RAID=1
00:02:11.990 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:11.990 ++ RUN_NIGHTLY=1
00:02:11.990 + cd /var/jenkins/workspace/raid-vg-autotest
00:02:11.990 + nvme_files=()
00:02:11.990 + declare -A nvme_files
00:02:11.990 + backend_dir=/var/lib/libvirt/images/backends
00:02:11.990 + nvme_files['nvme.img']=5G
00:02:11.990 + nvme_files['nvme-cmb.img']=5G
00:02:11.990 + nvme_files['nvme-multi0.img']=4G
00:02:11.990 + nvme_files['nvme-multi1.img']=4G
00:02:11.990 + nvme_files['nvme-multi2.img']=4G
00:02:11.990 + nvme_files['nvme-openstack.img']=8G
00:02:11.990 + nvme_files['nvme-zns.img']=5G
00:02:11.990 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:11.990 + (( SPDK_TEST_FTL == 1 ))
00:02:11.990 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:11.990 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:11.990 + for nvme in "${!nvme_files[@]}"
00:02:11.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:02:11.990 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:11.990 + for nvme in "${!nvme_files[@]}"
00:02:11.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:02:11.990 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:11.990 + for nvme in "${!nvme_files[@]}"
00:02:11.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:02:11.990 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:11.990 + for nvme in "${!nvme_files[@]}"
00:02:11.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:02:11.990 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:11.990 + for nvme in "${!nvme_files[@]}"
00:02:11.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:02:11.990 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:11.990 + for nvme in "${!nvme_files[@]}"
00:02:11.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:02:11.990 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:11.990 + for nvme in "${!nvme_files[@]}"
00:02:11.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:02:12.250 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:12.250 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:02:12.250 + echo 'End stage prepare_nvme.sh'
00:02:12.250 End stage prepare_nvme.sh
00:02:12.262 [Pipeline] sh
00:02:12.547 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:12.547 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39
00:02:12.547
00:02:12.547 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:02:12.547 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:02:12.547 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:02:12.547 HELP=0
00:02:12.547 DRY_RUN=0
00:02:12.547 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,
00:02:12.547 NVME_DISKS_TYPE=nvme,nvme,
00:02:12.547 NVME_AUTO_CREATE=0
00:02:12.547 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,
00:02:12.547 NVME_CMB=,,
00:02:12.547 NVME_PMR=,,
00:02:12.547 NVME_ZNS=,,
00:02:12.547 NVME_MS=,,
00:02:12.547 NVME_FDP=,,
00:02:12.547 SPDK_VAGRANT_DISTRO=fedora39
00:02:12.547 SPDK_VAGRANT_VMCPU=10
00:02:12.547 SPDK_VAGRANT_VMRAM=12288
00:02:12.547 SPDK_VAGRANT_PROVIDER=libvirt
00:02:12.547 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:12.547 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:12.547 SPDK_OPENSTACK_NETWORK=0
00:02:12.547 VAGRANT_PACKAGE_BOX=0
00:02:12.547 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:12.547 FORCE_DISTRO=true
00:02:12.547 VAGRANT_BOX_VERSION=
00:02:12.547 EXTRA_VAGRANTFILES=
00:02:12.547 NIC_MODEL=virtio
00:02:12.547
00:02:12.547 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:02:12.547 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:02:14.472 Bringing machine 'default' up with 'libvirt' provider...
00:02:15.041 ==> default: Creating image (snapshot of base box volume).
00:02:15.041 ==> default: Creating domain with the following settings...
00:02:15.041 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732681151_1bc023a66f6a4f17e4ad
00:02:15.041 ==> default: -- Domain type: kvm
00:02:15.041 ==> default: -- Cpus: 10
00:02:15.041 ==> default: -- Feature: acpi
00:02:15.041 ==> default: -- Feature: apic
00:02:15.041 ==> default: -- Feature: pae
00:02:15.041 ==> default: -- Memory: 12288M
00:02:15.041 ==> default: -- Memory Backing: hugepages:
00:02:15.041 ==> default: -- Management MAC:
00:02:15.041 ==> default: -- Loader:
00:02:15.041 ==> default: -- Nvram:
00:02:15.041 ==> default: -- Base box: spdk/fedora39
00:02:15.041 ==> default: -- Storage pool: default
00:02:15.041 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732681151_1bc023a66f6a4f17e4ad.img (20G)
00:02:15.041 ==> default: -- Volume Cache: default
00:02:15.041 ==> default: -- Kernel:
00:02:15.041 ==> default: -- Initrd:
00:02:15.041 ==> default: -- Graphics Type: vnc
00:02:15.041 ==> default: -- Graphics Port: -1
00:02:15.041 ==> default: -- Graphics IP: 127.0.0.1
00:02:15.041 ==> default: -- Graphics Password: Not defined
00:02:15.041 ==> default: -- Video Type: cirrus
00:02:15.041 ==> default: -- Video VRAM: 9216
00:02:15.041 ==> default: -- Sound Type:
00:02:15.041 ==> default: -- Keymap: en-us
00:02:15.041 ==> default: -- TPM Path:
00:02:15.041 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:15.041 ==> default: -- Command line args:
00:02:15.041 ==> default: -> value=-device,
00:02:15.041 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:15.041 ==> default: -> value=-drive,
00:02:15.041 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:02:15.041 ==> default: -> value=-device,
00:02:15.041 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:15.041 ==> default: -> value=-device,
00:02:15.041 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:15.041 ==> default: -> value=-drive,
00:02:15.041 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:02:15.041 ==> default: -> value=-device,
00:02:15.042 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:15.042 ==> default: -> value=-drive,
00:02:15.042 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:02:15.042 ==> default: -> value=-device,
00:02:15.042 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:15.042 ==> default: -> value=-drive,
00:02:15.042 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:02:15.042 ==> default: -> value=-device,
00:02:15.042 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:15.301 ==> default: Creating shared folders metadata...
00:02:15.301 ==> default: Starting domain.
00:02:16.682 ==> default: Waiting for domain to get an IP address...
00:02:34.826 ==> default: Waiting for SSH to become available...
00:02:34.826 ==> default: Configuring and enabling network interfaces...
00:02:40.110 default: SSH address: 192.168.121.105:22
00:02:40.110 default: SSH username: vagrant
00:02:40.110 default: SSH auth method: private key
00:02:42.667 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:50.796 ==> default: Mounting SSHFS shared folder...
00:02:53.338 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:53.338 ==> default: Checking Mount..
00:02:54.721 ==> default: Folder Successfully Mounted!
00:02:54.721 ==> default: Running provisioner: file...
00:02:55.661 default: ~/.gitconfig => .gitconfig
00:02:56.230
00:02:56.230 SUCCESS!
00:02:56.230
00:02:56.230 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:56.230 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:56.230 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:56.230
00:02:56.239 [Pipeline] }
00:02:56.253 [Pipeline] // stage
00:02:56.261 [Pipeline] dir
00:02:56.261 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:56.263 [Pipeline] {
00:02:56.275 [Pipeline] catchError
00:02:56.276 [Pipeline] {
00:02:56.288 [Pipeline] sh
00:02:56.574 + vagrant ssh-config --host vagrant
00:02:56.574 + sed -ne /^Host/,$p
00:02:56.574 + tee ssh_conf
00:02:59.116 Host vagrant
00:02:59.116 HostName 192.168.121.105
00:02:59.116 User vagrant
00:02:59.116 Port 22
00:02:59.117 UserKnownHostsFile /dev/null
00:02:59.117 StrictHostKeyChecking no
00:02:59.117 PasswordAuthentication no
00:02:59.117 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:59.117 IdentitiesOnly yes
00:02:59.117 LogLevel FATAL
00:02:59.117 ForwardAgent yes
00:02:59.117 ForwardX11 yes
00:02:59.117
00:02:59.132 [Pipeline] withEnv
00:02:59.134 [Pipeline] {
00:02:59.149 [Pipeline] sh
00:02:59.434 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:59.434 source /etc/os-release
00:02:59.435 [[ -e /image.version ]] && img=$(< /image.version)
00:02:59.435 # Minimal, systemd-like check.
00:02:59.435 if [[ -e /.dockerenv ]]; then
00:02:59.435 # Clear garbage from the node's name:
00:02:59.435 # agt-er_autotest_547-896 -> autotest_547-896
00:02:59.435 # $HOSTNAME is the actual container id
00:02:59.435 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:59.435 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:59.435 # We can assume this is a mount from a host where container is running,
00:02:59.435 # so fetch its hostname to easily identify the target swarm worker.
00:02:59.435 container="$(< /etc/hostname) ($agent)"
00:02:59.435 else
00:02:59.435 # Fallback
00:02:59.435 container=$agent
00:02:59.435 fi
00:02:59.435 fi
00:02:59.435 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:59.435
00:02:59.708 [Pipeline] }
00:02:59.724 [Pipeline] // withEnv
00:02:59.735 [Pipeline] setCustomBuildProperty
00:02:59.751 [Pipeline] stage
00:02:59.753 [Pipeline] { (Tests)
00:02:59.771 [Pipeline] sh
00:03:00.056 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:00.334 [Pipeline] sh
00:03:00.620 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:00.898 [Pipeline] timeout
00:03:00.898 Timeout set to expire in 1 hr 30 min
00:03:00.900 [Pipeline] {
00:03:00.915 [Pipeline] sh
00:03:01.200 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:01.770 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:03:01.784 [Pipeline] sh
00:03:02.068 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:02.341 [Pipeline] sh
00:03:02.626 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:02.903 [Pipeline] sh
00:03:03.186 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:03:03.445 ++ readlink -f spdk_repo
00:03:03.445 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:03.445 + [[ -n /home/vagrant/spdk_repo ]]
00:03:03.445 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:03.445 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:03.445 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:03.445 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:03.445 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:03.445 + [[ raid-vg-autotest == pkgdep-* ]]
00:03:03.445 + cd /home/vagrant/spdk_repo
00:03:03.445 + source /etc/os-release
00:03:03.445 ++ NAME='Fedora Linux'
00:03:03.445 ++ VERSION='39 (Cloud Edition)'
00:03:03.445 ++ ID=fedora
00:03:03.445 ++ VERSION_ID=39
00:03:03.445 ++ VERSION_CODENAME=
00:03:03.445 ++ PLATFORM_ID=platform:f39
00:03:03.445 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:03.445 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:03.445 ++ LOGO=fedora-logo-icon
00:03:03.445 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:03.445 ++ HOME_URL=https://fedoraproject.org/
00:03:03.445 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:03.445 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:03.445 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:03.445 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:03.445 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:03.445 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:03.445 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:03.445 ++ SUPPORT_END=2024-11-12
00:03:03.445 ++ VARIANT='Cloud Edition'
00:03:03.445 ++ VARIANT_ID=cloud
00:03:03.445 + uname -a
00:03:03.445 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:03.445 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:04.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:04.047 Hugepages
00:03:04.047 node hugesize free / total
00:03:04.047 node0 1048576kB 0 / 0
00:03:04.047 node0 2048kB 0 / 0
00:03:04.047
00:03:04.047 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:04.047 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:04.047 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:04.047 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:03:04.047 + rm -f /tmp/spdk-ld-path
00:03:04.047 + source autorun-spdk.conf
00:03:04.047 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:04.047 ++ SPDK_RUN_ASAN=1
00:03:04.047 ++ SPDK_RUN_UBSAN=1
00:03:04.047 ++ SPDK_TEST_RAID=1
00:03:04.047 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:04.047 ++ RUN_NIGHTLY=1
00:03:04.047 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:04.047 + [[ -n '' ]]
00:03:04.047 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:04.047 + for M in /var/spdk/build-*-manifest.txt
00:03:04.047 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:04.047 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:04.047 + for M in /var/spdk/build-*-manifest.txt
00:03:04.047 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:04.047 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:04.307 + for M in /var/spdk/build-*-manifest.txt
00:03:04.307 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:04.307 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:04.307 ++ uname
00:03:04.307 + [[ Linux == \L\i\n\u\x ]]
00:03:04.307 + sudo dmesg -T
00:03:04.307 + sudo dmesg --clear
00:03:04.307 + dmesg_pid=5431
00:03:04.307 + [[ Fedora Linux == FreeBSD ]]
00:03:04.307 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:04.307 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:04.307 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:04.307 + [[ -x /usr/src/fio-static/fio ]]
00:03:04.307 + sudo dmesg -Tw
00:03:04.307 + export FIO_BIN=/usr/src/fio-static/fio
00:03:04.307 + FIO_BIN=/usr/src/fio-static/fio
00:03:04.307 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:04.307 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:04.307 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:04.307 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:04.307 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:04.307 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:04.307 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:04.307 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:04.307 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:04.307 04:20:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:04.307 04:20:00 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:04.307 04:20:00 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:04.307 04:20:00 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:03:04.307 04:20:00 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:03:04.307 04:20:00 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:03:04.307 04:20:00 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:04.307 04:20:00 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1
00:03:04.307 04:20:00 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:04.307 04:20:00 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:04.568 04:20:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:04.568 04:20:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:04.568 04:20:00 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:04.568 04:20:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:04.568 04:20:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:04.568 04:20:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:04.568 04:20:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:04.568 04:20:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:04.568 04:20:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:04.568 04:20:00 -- paths/export.sh@5 -- $ export PATH
00:03:04.568 04:20:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:04.568 04:20:00 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:04.568 04:20:00 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:04.568 04:20:00 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732681200.XXXXXX
00:03:04.568 04:20:00 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732681200.fRfbhw
00:03:04.568 04:20:00 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:04.568 04:20:00 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:04.568 04:20:00 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:04.568 04:20:00 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:04.568 04:20:00 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:04.568 04:20:00 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:04.568 04:20:00 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:04.568 04:20:00 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.568 04:20:00 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:03:04.568 04:20:00 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:04.568 04:20:00 -- pm/common@17 -- $ local monitor
00:03:04.568 04:20:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:04.568 04:20:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:04.568 04:20:00 -- pm/common@25 -- $ sleep 1
00:03:04.568 04:20:00 -- pm/common@21 -- $ date +%s
00:03:04.568 04:20:00 -- pm/common@21 -- $ date +%s
00:03:04.568 04:20:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732681200
00:03:04.568 04:20:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732681200
00:03:04.568 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732681200_collect-cpu-load.pm.log
00:03:04.568 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732681200_collect-vmstat.pm.log
00:03:05.510 04:20:01 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:05.510 04:20:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:05.510 04:20:01 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:05.510 04:20:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:05.510 04:20:01 -- spdk/autobuild.sh@16 -- $ date -u
00:03:05.510 Wed Nov 27 04:20:01 AM UTC 2024
00:03:05.510 04:20:01 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:05.510 v25.01-pre-271-g2f2acf4eb
00:03:05.510 04:20:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:05.510 04:20:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:05.510 04:20:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:05.510 04:20:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:05.510 04:20:02 -- common/autotest_common.sh@10 -- $ set +x
00:03:05.510 ************************************
00:03:05.510 START TEST asan
00:03:05.510 ************************************
00:03:05.510 using asan
00:03:05.510 04:20:02 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:05.510
00:03:05.510 real 0m0.000s
00:03:05.510 user 0m0.000s
00:03:05.510 sys 0m0.000s
00:03:05.510 04:20:02 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:05.510 04:20:02 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:05.510 ************************************
00:03:05.510 END TEST asan
00:03:05.510 ************************************
00:03:05.771 04:20:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:05.771 04:20:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:05.771 04:20:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:05.771 04:20:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:05.771 04:20:02 -- common/autotest_common.sh@10 -- $ set +x
00:03:05.771 ************************************
00:03:05.771 START TEST ubsan
00:03:05.771 ************************************
00:03:05.771 using ubsan
00:03:05.771 04:20:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:05.771
00:03:05.771 real 0m0.000s
00:03:05.771 user 0m0.000s
00:03:05.771 sys 0m0.000s
00:03:05.771 04:20:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:05.771 04:20:02 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:05.772 ************************************
00:03:05.772 END TEST ubsan
00:03:05.772 ************************************
00:03:05.772 04:20:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:05.772 04:20:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:05.772 04:20:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:05.772 04:20:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:05.772 04:20:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:05.772 04:20:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:05.772 04:20:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:05.772 04:20:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:05.772 04:20:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:03:05.772 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:05.772 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:06.341 Using 'verbs' RDMA provider
00:03:25.415 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:40.316 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:40.316 Creating mk/config.mk...done.
00:03:40.316 Creating mk/cc.flags.mk...done.
00:03:40.316 Type 'make' to build.
00:03:40.316 04:20:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:40.316 04:20:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:40.316 04:20:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:40.316 04:20:35 -- common/autotest_common.sh@10 -- $ set +x
00:03:40.316 ************************************
00:03:40.316 START TEST make
00:03:40.316 ************************************
00:03:40.316 04:20:35 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:40.448 make[1]: Nothing to be done for 'all'.
00:03:50.364 The Meson build system 00:03:50.364 Version: 1.5.0 00:03:50.364 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:50.364 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:50.364 Build type: native build 00:03:50.364 Program cat found: YES (/usr/bin/cat) 00:03:50.364 Project name: DPDK 00:03:50.364 Project version: 24.03.0 00:03:50.364 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:50.364 C linker for the host machine: cc ld.bfd 2.40-14 00:03:50.364 Host machine cpu family: x86_64 00:03:50.364 Host machine cpu: x86_64 00:03:50.364 Message: ## Building in Developer Mode ## 00:03:50.364 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:50.364 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:50.364 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:50.364 Program python3 found: YES (/usr/bin/python3) 00:03:50.364 Program cat found: YES (/usr/bin/cat) 00:03:50.364 Compiler for C supports arguments -march=native: YES 00:03:50.364 Checking for size of "void *" : 8 00:03:50.364 Checking for size of "void *" : 8 (cached) 00:03:50.364 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:50.364 Library m found: YES 00:03:50.364 Library numa found: YES 00:03:50.364 Has header "numaif.h" : YES 00:03:50.364 Library fdt found: NO 00:03:50.364 Library execinfo found: NO 00:03:50.364 Has header "execinfo.h" : YES 00:03:50.364 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:50.364 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:50.364 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:50.364 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:50.364 Run-time dependency openssl found: YES 3.1.1 00:03:50.364 Run-time dependency libpcap found: YES 1.10.4 00:03:50.364 Has header "pcap.h" with dependency 
libpcap: YES 00:03:50.364 Compiler for C supports arguments -Wcast-qual: YES 00:03:50.364 Compiler for C supports arguments -Wdeprecated: YES 00:03:50.364 Compiler for C supports arguments -Wformat: YES 00:03:50.364 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:50.364 Compiler for C supports arguments -Wformat-security: NO 00:03:50.364 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:50.364 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:50.364 Compiler for C supports arguments -Wnested-externs: YES 00:03:50.364 Compiler for C supports arguments -Wold-style-definition: YES 00:03:50.364 Compiler for C supports arguments -Wpointer-arith: YES 00:03:50.364 Compiler for C supports arguments -Wsign-compare: YES 00:03:50.364 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:50.364 Compiler for C supports arguments -Wundef: YES 00:03:50.364 Compiler for C supports arguments -Wwrite-strings: YES 00:03:50.364 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:50.364 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:50.364 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:50.364 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:50.364 Program objdump found: YES (/usr/bin/objdump) 00:03:50.364 Compiler for C supports arguments -mavx512f: YES 00:03:50.364 Checking if "AVX512 checking" compiles: YES 00:03:50.364 Fetching value of define "__SSE4_2__" : 1 00:03:50.364 Fetching value of define "__AES__" : 1 00:03:50.364 Fetching value of define "__AVX__" : 1 00:03:50.364 Fetching value of define "__AVX2__" : 1 00:03:50.364 Fetching value of define "__AVX512BW__" : 1 00:03:50.364 Fetching value of define "__AVX512CD__" : 1 00:03:50.364 Fetching value of define "__AVX512DQ__" : 1 00:03:50.364 Fetching value of define "__AVX512F__" : 1 00:03:50.364 Fetching value of define "__AVX512VL__" : 1 00:03:50.364 Fetching value of define 
"__PCLMUL__" : 1 00:03:50.364 Fetching value of define "__RDRND__" : 1 00:03:50.364 Fetching value of define "__RDSEED__" : 1 00:03:50.364 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:50.364 Fetching value of define "__znver1__" : (undefined) 00:03:50.364 Fetching value of define "__znver2__" : (undefined) 00:03:50.364 Fetching value of define "__znver3__" : (undefined) 00:03:50.364 Fetching value of define "__znver4__" : (undefined) 00:03:50.364 Library asan found: YES 00:03:50.364 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:50.364 Message: lib/log: Defining dependency "log" 00:03:50.364 Message: lib/kvargs: Defining dependency "kvargs" 00:03:50.364 Message: lib/telemetry: Defining dependency "telemetry" 00:03:50.364 Library rt found: YES 00:03:50.364 Checking for function "getentropy" : NO 00:03:50.364 Message: lib/eal: Defining dependency "eal" 00:03:50.364 Message: lib/ring: Defining dependency "ring" 00:03:50.364 Message: lib/rcu: Defining dependency "rcu" 00:03:50.364 Message: lib/mempool: Defining dependency "mempool" 00:03:50.364 Message: lib/mbuf: Defining dependency "mbuf" 00:03:50.364 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:50.364 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:50.364 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:50.364 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:50.364 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:50.364 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:50.364 Compiler for C supports arguments -mpclmul: YES 00:03:50.364 Compiler for C supports arguments -maes: YES 00:03:50.364 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:50.364 Compiler for C supports arguments -mavx512bw: YES 00:03:50.364 Compiler for C supports arguments -mavx512dq: YES 00:03:50.364 Compiler for C supports arguments -mavx512vl: YES 00:03:50.364 Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:50.364 Compiler for C supports arguments -mavx2: YES 00:03:50.364 Compiler for C supports arguments -mavx: YES 00:03:50.364 Message: lib/net: Defining dependency "net" 00:03:50.364 Message: lib/meter: Defining dependency "meter" 00:03:50.364 Message: lib/ethdev: Defining dependency "ethdev" 00:03:50.364 Message: lib/pci: Defining dependency "pci" 00:03:50.364 Message: lib/cmdline: Defining dependency "cmdline" 00:03:50.364 Message: lib/hash: Defining dependency "hash" 00:03:50.364 Message: lib/timer: Defining dependency "timer" 00:03:50.364 Message: lib/compressdev: Defining dependency "compressdev" 00:03:50.364 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:50.364 Message: lib/dmadev: Defining dependency "dmadev" 00:03:50.365 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:50.365 Message: lib/power: Defining dependency "power" 00:03:50.365 Message: lib/reorder: Defining dependency "reorder" 00:03:50.365 Message: lib/security: Defining dependency "security" 00:03:50.365 Has header "linux/userfaultfd.h" : YES 00:03:50.365 Has header "linux/vduse.h" : YES 00:03:50.365 Message: lib/vhost: Defining dependency "vhost" 00:03:50.365 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:50.365 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:50.365 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:50.365 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:50.365 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:50.365 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:50.365 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:50.365 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:50.365 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:50.365 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:50.365 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:50.365 Configuring doxy-api-html.conf using configuration 00:03:50.365 Configuring doxy-api-man.conf using configuration 00:03:50.365 Program mandb found: YES (/usr/bin/mandb) 00:03:50.365 Program sphinx-build found: NO 00:03:50.365 Configuring rte_build_config.h using configuration 00:03:50.365 Message: 00:03:50.365 ================= 00:03:50.365 Applications Enabled 00:03:50.365 ================= 00:03:50.365 00:03:50.365 apps: 00:03:50.365 00:03:50.365 00:03:50.365 Message: 00:03:50.365 ================= 00:03:50.365 Libraries Enabled 00:03:50.365 ================= 00:03:50.365 00:03:50.365 libs: 00:03:50.365 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:50.365 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:50.365 cryptodev, dmadev, power, reorder, security, vhost, 00:03:50.365 00:03:50.365 Message: 00:03:50.365 =============== 00:03:50.365 Drivers Enabled 00:03:50.365 =============== 00:03:50.365 00:03:50.365 common: 00:03:50.365 00:03:50.365 bus: 00:03:50.365 pci, vdev, 00:03:50.365 mempool: 00:03:50.365 ring, 00:03:50.365 dma: 00:03:50.365 00:03:50.365 net: 00:03:50.365 00:03:50.365 crypto: 00:03:50.365 00:03:50.365 compress: 00:03:50.365 00:03:50.365 vdpa: 00:03:50.365 00:03:50.365 00:03:50.365 Message: 00:03:50.365 ================= 00:03:50.365 Content Skipped 00:03:50.365 ================= 00:03:50.365 00:03:50.365 apps: 00:03:50.365 dumpcap: explicitly disabled via build config 00:03:50.365 graph: explicitly disabled via build config 00:03:50.365 pdump: explicitly disabled via build config 00:03:50.365 proc-info: explicitly disabled via build config 00:03:50.365 test-acl: explicitly disabled via build config 00:03:50.365 test-bbdev: explicitly disabled via build config 00:03:50.365 test-cmdline: explicitly disabled via build config 00:03:50.365 test-compress-perf: explicitly disabled via build config 00:03:50.365 test-crypto-perf: explicitly disabled via build 
config 00:03:50.365 test-dma-perf: explicitly disabled via build config 00:03:50.365 test-eventdev: explicitly disabled via build config 00:03:50.365 test-fib: explicitly disabled via build config 00:03:50.365 test-flow-perf: explicitly disabled via build config 00:03:50.365 test-gpudev: explicitly disabled via build config 00:03:50.365 test-mldev: explicitly disabled via build config 00:03:50.365 test-pipeline: explicitly disabled via build config 00:03:50.365 test-pmd: explicitly disabled via build config 00:03:50.365 test-regex: explicitly disabled via build config 00:03:50.365 test-sad: explicitly disabled via build config 00:03:50.365 test-security-perf: explicitly disabled via build config 00:03:50.365 00:03:50.365 libs: 00:03:50.365 argparse: explicitly disabled via build config 00:03:50.365 metrics: explicitly disabled via build config 00:03:50.365 acl: explicitly disabled via build config 00:03:50.365 bbdev: explicitly disabled via build config 00:03:50.365 bitratestats: explicitly disabled via build config 00:03:50.365 bpf: explicitly disabled via build config 00:03:50.365 cfgfile: explicitly disabled via build config 00:03:50.365 distributor: explicitly disabled via build config 00:03:50.365 efd: explicitly disabled via build config 00:03:50.365 eventdev: explicitly disabled via build config 00:03:50.365 dispatcher: explicitly disabled via build config 00:03:50.365 gpudev: explicitly disabled via build config 00:03:50.365 gro: explicitly disabled via build config 00:03:50.365 gso: explicitly disabled via build config 00:03:50.365 ip_frag: explicitly disabled via build config 00:03:50.365 jobstats: explicitly disabled via build config 00:03:50.365 latencystats: explicitly disabled via build config 00:03:50.365 lpm: explicitly disabled via build config 00:03:50.365 member: explicitly disabled via build config 00:03:50.365 pcapng: explicitly disabled via build config 00:03:50.365 rawdev: explicitly disabled via build config 00:03:50.365 regexdev: explicitly 
disabled via build config 00:03:50.365 mldev: explicitly disabled via build config 00:03:50.365 rib: explicitly disabled via build config 00:03:50.365 sched: explicitly disabled via build config 00:03:50.365 stack: explicitly disabled via build config 00:03:50.365 ipsec: explicitly disabled via build config 00:03:50.365 pdcp: explicitly disabled via build config 00:03:50.365 fib: explicitly disabled via build config 00:03:50.365 port: explicitly disabled via build config 00:03:50.365 pdump: explicitly disabled via build config 00:03:50.365 table: explicitly disabled via build config 00:03:50.365 pipeline: explicitly disabled via build config 00:03:50.365 graph: explicitly disabled via build config 00:03:50.365 node: explicitly disabled via build config 00:03:50.365 00:03:50.365 drivers: 00:03:50.365 common/cpt: not in enabled drivers build config 00:03:50.365 common/dpaax: not in enabled drivers build config 00:03:50.365 common/iavf: not in enabled drivers build config 00:03:50.365 common/idpf: not in enabled drivers build config 00:03:50.365 common/ionic: not in enabled drivers build config 00:03:50.365 common/mvep: not in enabled drivers build config 00:03:50.365 common/octeontx: not in enabled drivers build config 00:03:50.365 bus/auxiliary: not in enabled drivers build config 00:03:50.365 bus/cdx: not in enabled drivers build config 00:03:50.365 bus/dpaa: not in enabled drivers build config 00:03:50.365 bus/fslmc: not in enabled drivers build config 00:03:50.365 bus/ifpga: not in enabled drivers build config 00:03:50.365 bus/platform: not in enabled drivers build config 00:03:50.365 bus/uacce: not in enabled drivers build config 00:03:50.365 bus/vmbus: not in enabled drivers build config 00:03:50.365 common/cnxk: not in enabled drivers build config 00:03:50.365 common/mlx5: not in enabled drivers build config 00:03:50.365 common/nfp: not in enabled drivers build config 00:03:50.365 common/nitrox: not in enabled drivers build config 00:03:50.365 common/qat: not 
in enabled drivers build config 00:03:50.365 common/sfc_efx: not in enabled drivers build config 00:03:50.365 mempool/bucket: not in enabled drivers build config 00:03:50.365 mempool/cnxk: not in enabled drivers build config 00:03:50.365 mempool/dpaa: not in enabled drivers build config 00:03:50.365 mempool/dpaa2: not in enabled drivers build config 00:03:50.365 mempool/octeontx: not in enabled drivers build config 00:03:50.365 mempool/stack: not in enabled drivers build config 00:03:50.365 dma/cnxk: not in enabled drivers build config 00:03:50.365 dma/dpaa: not in enabled drivers build config 00:03:50.365 dma/dpaa2: not in enabled drivers build config 00:03:50.365 dma/hisilicon: not in enabled drivers build config 00:03:50.365 dma/idxd: not in enabled drivers build config 00:03:50.365 dma/ioat: not in enabled drivers build config 00:03:50.365 dma/skeleton: not in enabled drivers build config 00:03:50.365 net/af_packet: not in enabled drivers build config 00:03:50.365 net/af_xdp: not in enabled drivers build config 00:03:50.365 net/ark: not in enabled drivers build config 00:03:50.365 net/atlantic: not in enabled drivers build config 00:03:50.365 net/avp: not in enabled drivers build config 00:03:50.365 net/axgbe: not in enabled drivers build config 00:03:50.365 net/bnx2x: not in enabled drivers build config 00:03:50.365 net/bnxt: not in enabled drivers build config 00:03:50.365 net/bonding: not in enabled drivers build config 00:03:50.365 net/cnxk: not in enabled drivers build config 00:03:50.365 net/cpfl: not in enabled drivers build config 00:03:50.365 net/cxgbe: not in enabled drivers build config 00:03:50.365 net/dpaa: not in enabled drivers build config 00:03:50.365 net/dpaa2: not in enabled drivers build config 00:03:50.365 net/e1000: not in enabled drivers build config 00:03:50.365 net/ena: not in enabled drivers build config 00:03:50.365 net/enetc: not in enabled drivers build config 00:03:50.365 net/enetfec: not in enabled drivers build config 
00:03:50.365 net/enic: not in enabled drivers build config 00:03:50.365 net/failsafe: not in enabled drivers build config 00:03:50.365 net/fm10k: not in enabled drivers build config 00:03:50.365 net/gve: not in enabled drivers build config 00:03:50.365 net/hinic: not in enabled drivers build config 00:03:50.365 net/hns3: not in enabled drivers build config 00:03:50.365 net/i40e: not in enabled drivers build config 00:03:50.365 net/iavf: not in enabled drivers build config 00:03:50.365 net/ice: not in enabled drivers build config 00:03:50.365 net/idpf: not in enabled drivers build config 00:03:50.365 net/igc: not in enabled drivers build config 00:03:50.365 net/ionic: not in enabled drivers build config 00:03:50.365 net/ipn3ke: not in enabled drivers build config 00:03:50.365 net/ixgbe: not in enabled drivers build config 00:03:50.365 net/mana: not in enabled drivers build config 00:03:50.365 net/memif: not in enabled drivers build config 00:03:50.365 net/mlx4: not in enabled drivers build config 00:03:50.365 net/mlx5: not in enabled drivers build config 00:03:50.365 net/mvneta: not in enabled drivers build config 00:03:50.365 net/mvpp2: not in enabled drivers build config 00:03:50.365 net/netvsc: not in enabled drivers build config 00:03:50.365 net/nfb: not in enabled drivers build config 00:03:50.366 net/nfp: not in enabled drivers build config 00:03:50.366 net/ngbe: not in enabled drivers build config 00:03:50.366 net/null: not in enabled drivers build config 00:03:50.366 net/octeontx: not in enabled drivers build config 00:03:50.366 net/octeon_ep: not in enabled drivers build config 00:03:50.366 net/pcap: not in enabled drivers build config 00:03:50.366 net/pfe: not in enabled drivers build config 00:03:50.366 net/qede: not in enabled drivers build config 00:03:50.366 net/ring: not in enabled drivers build config 00:03:50.366 net/sfc: not in enabled drivers build config 00:03:50.366 net/softnic: not in enabled drivers build config 00:03:50.366 net/tap: not in 
enabled drivers build config 00:03:50.366 net/thunderx: not in enabled drivers build config 00:03:50.366 net/txgbe: not in enabled drivers build config 00:03:50.366 net/vdev_netvsc: not in enabled drivers build config 00:03:50.366 net/vhost: not in enabled drivers build config 00:03:50.366 net/virtio: not in enabled drivers build config 00:03:50.366 net/vmxnet3: not in enabled drivers build config 00:03:50.366 raw/*: missing internal dependency, "rawdev" 00:03:50.366 crypto/armv8: not in enabled drivers build config 00:03:50.366 crypto/bcmfs: not in enabled drivers build config 00:03:50.366 crypto/caam_jr: not in enabled drivers build config 00:03:50.366 crypto/ccp: not in enabled drivers build config 00:03:50.366 crypto/cnxk: not in enabled drivers build config 00:03:50.366 crypto/dpaa_sec: not in enabled drivers build config 00:03:50.366 crypto/dpaa2_sec: not in enabled drivers build config 00:03:50.366 crypto/ipsec_mb: not in enabled drivers build config 00:03:50.366 crypto/mlx5: not in enabled drivers build config 00:03:50.366 crypto/mvsam: not in enabled drivers build config 00:03:50.366 crypto/nitrox: not in enabled drivers build config 00:03:50.366 crypto/null: not in enabled drivers build config 00:03:50.366 crypto/octeontx: not in enabled drivers build config 00:03:50.366 crypto/openssl: not in enabled drivers build config 00:03:50.366 crypto/scheduler: not in enabled drivers build config 00:03:50.366 crypto/uadk: not in enabled drivers build config 00:03:50.366 crypto/virtio: not in enabled drivers build config 00:03:50.366 compress/isal: not in enabled drivers build config 00:03:50.366 compress/mlx5: not in enabled drivers build config 00:03:50.366 compress/nitrox: not in enabled drivers build config 00:03:50.366 compress/octeontx: not in enabled drivers build config 00:03:50.366 compress/zlib: not in enabled drivers build config 00:03:50.366 regex/*: missing internal dependency, "regexdev" 00:03:50.366 ml/*: missing internal dependency, "mldev" 
00:03:50.366 vdpa/ifc: not in enabled drivers build config 00:03:50.366 vdpa/mlx5: not in enabled drivers build config 00:03:50.366 vdpa/nfp: not in enabled drivers build config 00:03:50.366 vdpa/sfc: not in enabled drivers build config 00:03:50.366 event/*: missing internal dependency, "eventdev" 00:03:50.366 baseband/*: missing internal dependency, "bbdev" 00:03:50.366 gpu/*: missing internal dependency, "gpudev" 00:03:50.366 00:03:50.366 00:03:50.366 Build targets in project: 85 00:03:50.366 00:03:50.366 DPDK 24.03.0 00:03:50.366 00:03:50.366 User defined options 00:03:50.366 buildtype : debug 00:03:50.366 default_library : shared 00:03:50.366 libdir : lib 00:03:50.366 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:50.366 b_sanitize : address 00:03:50.366 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:50.366 c_link_args : 00:03:50.366 cpu_instruction_set: native 00:03:50.366 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:50.366 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:50.366 enable_docs : false 00:03:50.366 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:50.366 enable_kmods : false 00:03:50.366 max_lcores : 128 00:03:50.366 tests : false 00:03:50.366 00:03:50.366 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:50.934 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:50.934 [1/268] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:50.934 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:50.934 [3/268] Linking static target lib/librte_kvargs.a 00:03:50.934 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:50.934 [5/268] Linking static target lib/librte_log.a 00:03:51.193 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:51.452 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.452 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:51.452 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:51.452 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:51.452 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:51.710 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:51.710 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:51.710 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:51.710 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:51.710 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:51.710 [17/268] Linking static target lib/librte_telemetry.a 00:03:51.968 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:51.968 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:52.226 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.226 [21/268] Linking target lib/librte_log.so.24.1 00:03:52.226 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:52.226 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:52.226 [24/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:52.485 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:52.485 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:52.485 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:52.485 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:52.485 [29/268] Linking target lib/librte_kvargs.so.24.1 00:03:52.485 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:52.743 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:52.743 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.743 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:52.743 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:52.743 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:52.743 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:53.002 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:53.002 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:53.002 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:53.002 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:53.002 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:53.261 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:53.261 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:53.261 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:53.261 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:53.520 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:53.520 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:53.520 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:53.520 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:53.778 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:53.778 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:53.778 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:53.778 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:53.778 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:54.037 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:54.037 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:54.037 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:54.037 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:54.295 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:54.295 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:54.295 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:54.295 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:54.295 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:54.295 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:54.554 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:54.554 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:54.554 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:03:54.813 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:54.813 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:54.813 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:55.072 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:55.072 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:55.072 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:55.072 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:55.072 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:55.072 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:55.072 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:55.072 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:55.072 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:55.352 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:55.352 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:55.352 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:55.352 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:55.611 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:55.611 [85/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:55.611 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:55.611 [87/268] Linking static target lib/librte_rcu.a 00:03:55.870 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:55.870 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:55.870 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:55.870 [91/268] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:55.870 [92/268] Linking static target lib/librte_ring.a 00:03:55.870 [93/268] Linking static target lib/librte_eal.a 00:03:55.870 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:55.870 [95/268] Linking static target lib/librte_mempool.a 00:03:55.870 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:56.129 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:56.129 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.388 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.388 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:56.388 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:56.388 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:56.647 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:56.647 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:56.647 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:56.647 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:56.647 [107/268] Linking static target lib/librte_meter.a 00:03:56.906 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:56.906 [109/268] Linking static target lib/librte_net.a 00:03:56.906 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:56.906 [111/268] Linking static target lib/librte_mbuf.a 00:03:56.906 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:56.906 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:57.165 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.165 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:57.165 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:57.165 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.165 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.424 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:57.683 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:57.942 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:57.942 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:57.942 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.201 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:58.201 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:58.201 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:58.201 [127/268] Linking static target lib/librte_pci.a 00:03:58.201 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:58.459 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:58.459 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:58.459 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:58.459 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:58.718 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:58.718 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:58.718 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:58.718 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.718 [137/268] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:58.718 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:58.718 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:58.718 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:58.718 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:58.718 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:58.718 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:58.977 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:58.977 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:58.977 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:59.236 [147/268] Linking static target lib/librte_cmdline.a 00:03:59.236 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:59.236 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:59.236 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:59.495 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:59.495 [152/268] Linking static target lib/librte_timer.a 00:03:59.495 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:59.495 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:59.755 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:59.755 [156/268] Linking static target lib/librte_ethdev.a 00:03:59.755 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:59.755 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:59.755 [159/268] Linking static target lib/librte_compressdev.a 00:03:59.755 [160/268] Compiling 
C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:00.015 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:00.015 [162/268] Linking static target lib/librte_hash.a 00:04:00.274 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.274 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:00.274 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:00.274 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:00.274 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:00.532 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:00.532 [169/268] Linking static target lib/librte_dmadev.a 00:04:00.532 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:00.791 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:00.791 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:00.791 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.791 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:00.791 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.051 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:01.310 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.310 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:01.310 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:01.310 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.310 [181/268] Compiling C object 
lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:01.570 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:01.570 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:01.570 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:01.570 [185/268] Linking static target lib/librte_cryptodev.a 00:04:01.570 [186/268] Linking static target lib/librte_power.a 00:04:01.830 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:01.830 [188/268] Linking static target lib/librte_reorder.a 00:04:01.830 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:02.092 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:02.092 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:02.092 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:02.092 [193/268] Linking static target lib/librte_security.a 00:04:02.351 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.611 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:02.869 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.869 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.129 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:03.129 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:03.129 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:03.129 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:03.389 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:03.389 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:03.389 [204/268] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:03.647 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:03.647 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:03.906 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:03.906 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:03.906 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:03.906 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:04.165 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:04.165 [212/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.165 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:04.165 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:04.165 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:04.165 [216/268] Linking static target drivers/librte_bus_vdev.a 00:04:04.165 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:04.165 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:04.165 [219/268] Linking static target drivers/librte_bus_pci.a 00:04:04.165 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:04.165 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:04.425 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:04.425 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.425 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:04:04.425 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:04.425 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:04.684 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.622 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:07.527 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.527 [230/268] Linking target lib/librte_eal.so.24.1 00:04:07.527 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:07.787 [232/268] Linking target lib/librte_meter.so.24.1 00:04:07.787 [233/268] Linking target lib/librte_ring.so.24.1 00:04:07.787 [234/268] Linking target lib/librte_pci.so.24.1 00:04:07.787 [235/268] Linking target lib/librte_dmadev.so.24.1 00:04:07.787 [236/268] Linking target lib/librte_timer.so.24.1 00:04:07.787 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:07.787 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:07.787 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:07.787 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:07.787 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:07.787 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:07.787 [243/268] Linking target lib/librte_rcu.so.24.1 00:04:07.787 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:07.787 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:08.047 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:08.047 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:08.047 [248/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:04:08.047 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:08.306 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:08.306 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:04:08.306 [252/268] Linking target lib/librte_reorder.so.24.1 00:04:08.306 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:08.306 [254/268] Linking target lib/librte_net.so.24.1 00:04:08.306 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:08.306 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:08.565 [257/268] Linking target lib/librte_cmdline.so.24.1 00:04:08.565 [258/268] Linking target lib/librte_security.so.24.1 00:04:08.565 [259/268] Linking target lib/librte_hash.so.24.1 00:04:08.565 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:08.825 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.825 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:09.085 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:09.085 [264/268] Linking target lib/librte_power.so.24.1 00:04:10.994 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:10.994 [266/268] Linking static target lib/librte_vhost.a 00:04:13.531 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.531 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:13.531 INFO: autodetecting backend as ninja 00:04:13.531 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:35.477 CC lib/log/log.o 00:04:35.477 CC lib/log/log_flags.o 00:04:35.477 CC lib/log/log_deprecated.o 00:04:35.477 CC lib/ut/ut.o 00:04:35.477 CC lib/ut_mock/mock.o 00:04:35.477 LIB 
libspdk_ut.a 00:04:35.477 LIB libspdk_log.a 00:04:35.477 LIB libspdk_ut_mock.a 00:04:35.477 SO libspdk_ut.so.2.0 00:04:35.477 SO libspdk_ut_mock.so.6.0 00:04:35.477 SO libspdk_log.so.7.1 00:04:35.477 SYMLINK libspdk_ut.so 00:04:35.477 SYMLINK libspdk_ut_mock.so 00:04:35.477 SYMLINK libspdk_log.so 00:04:35.477 CC lib/util/bit_array.o 00:04:35.477 CC lib/util/cpuset.o 00:04:35.477 CC lib/ioat/ioat.o 00:04:35.477 CC lib/util/base64.o 00:04:35.477 CC lib/util/crc32.o 00:04:35.477 CC lib/util/crc32c.o 00:04:35.478 CC lib/util/crc16.o 00:04:35.478 CXX lib/trace_parser/trace.o 00:04:35.478 CC lib/dma/dma.o 00:04:35.478 CC lib/vfio_user/host/vfio_user_pci.o 00:04:35.478 CC lib/vfio_user/host/vfio_user.o 00:04:35.478 CC lib/util/crc32_ieee.o 00:04:35.478 CC lib/util/crc64.o 00:04:35.478 CC lib/util/dif.o 00:04:35.478 CC lib/util/fd.o 00:04:35.478 CC lib/util/fd_group.o 00:04:35.478 LIB libspdk_dma.a 00:04:35.478 SO libspdk_dma.so.5.0 00:04:35.478 CC lib/util/file.o 00:04:35.478 CC lib/util/hexlify.o 00:04:35.478 SYMLINK libspdk_dma.so 00:04:35.478 CC lib/util/iov.o 00:04:35.478 LIB libspdk_ioat.a 00:04:35.478 CC lib/util/math.o 00:04:35.478 SO libspdk_ioat.so.7.0 00:04:35.478 CC lib/util/net.o 00:04:35.478 LIB libspdk_vfio_user.a 00:04:35.478 SYMLINK libspdk_ioat.so 00:04:35.478 CC lib/util/pipe.o 00:04:35.478 SO libspdk_vfio_user.so.5.0 00:04:35.478 CC lib/util/strerror_tls.o 00:04:35.478 CC lib/util/string.o 00:04:35.478 CC lib/util/uuid.o 00:04:35.478 SYMLINK libspdk_vfio_user.so 00:04:35.478 CC lib/util/xor.o 00:04:35.478 CC lib/util/zipf.o 00:04:35.478 CC lib/util/md5.o 00:04:35.478 LIB libspdk_util.a 00:04:35.478 SO libspdk_util.so.10.1 00:04:35.478 LIB libspdk_trace_parser.a 00:04:35.478 SO libspdk_trace_parser.so.6.0 00:04:35.478 SYMLINK libspdk_trace_parser.so 00:04:35.478 SYMLINK libspdk_util.so 00:04:35.478 CC lib/conf/conf.o 00:04:35.478 CC lib/rdma_utils/rdma_utils.o 00:04:35.478 CC lib/vmd/vmd.o 00:04:35.478 CC lib/vmd/led.o 00:04:35.478 CC 
lib/json/json_util.o 00:04:35.478 CC lib/json/json_parse.o 00:04:35.478 CC lib/json/json_write.o 00:04:35.478 CC lib/env_dpdk/memory.o 00:04:35.478 CC lib/env_dpdk/env.o 00:04:35.478 CC lib/idxd/idxd.o 00:04:35.478 CC lib/idxd/idxd_user.o 00:04:35.478 LIB libspdk_conf.a 00:04:35.478 SO libspdk_conf.so.6.0 00:04:35.478 CC lib/env_dpdk/pci.o 00:04:35.478 CC lib/env_dpdk/init.o 00:04:35.478 LIB libspdk_rdma_utils.a 00:04:35.478 SYMLINK libspdk_conf.so 00:04:35.478 CC lib/idxd/idxd_kernel.o 00:04:35.478 SO libspdk_rdma_utils.so.1.0 00:04:35.478 LIB libspdk_json.a 00:04:35.478 SO libspdk_json.so.6.0 00:04:35.478 SYMLINK libspdk_rdma_utils.so 00:04:35.478 CC lib/env_dpdk/threads.o 00:04:35.478 SYMLINK libspdk_json.so 00:04:35.478 CC lib/env_dpdk/pci_ioat.o 00:04:35.478 CC lib/env_dpdk/pci_virtio.o 00:04:35.478 CC lib/env_dpdk/pci_vmd.o 00:04:35.478 CC lib/env_dpdk/pci_idxd.o 00:04:35.478 CC lib/env_dpdk/pci_event.o 00:04:35.478 CC lib/rdma_provider/common.o 00:04:35.478 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:35.738 CC lib/env_dpdk/sigbus_handler.o 00:04:35.738 CC lib/env_dpdk/pci_dpdk.o 00:04:35.738 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:35.738 LIB libspdk_vmd.a 00:04:35.738 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:35.738 LIB libspdk_idxd.a 00:04:35.738 SO libspdk_vmd.so.6.0 00:04:35.738 SO libspdk_idxd.so.12.1 00:04:35.738 SYMLINK libspdk_vmd.so 00:04:35.738 LIB libspdk_rdma_provider.a 00:04:35.738 SO libspdk_rdma_provider.so.7.0 00:04:35.738 SYMLINK libspdk_idxd.so 00:04:35.738 CC lib/jsonrpc/jsonrpc_server.o 00:04:35.738 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:35.738 CC lib/jsonrpc/jsonrpc_client.o 00:04:35.738 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:35.998 SYMLINK libspdk_rdma_provider.so 00:04:36.257 LIB libspdk_jsonrpc.a 00:04:36.257 SO libspdk_jsonrpc.so.6.0 00:04:36.257 SYMLINK libspdk_jsonrpc.so 00:04:36.826 CC lib/rpc/rpc.o 00:04:36.826 LIB libspdk_env_dpdk.a 00:04:36.826 SO libspdk_env_dpdk.so.15.1 00:04:36.826 LIB libspdk_rpc.a 00:04:37.084 SO 
libspdk_rpc.so.6.0 00:04:37.084 SYMLINK libspdk_env_dpdk.so 00:04:37.084 SYMLINK libspdk_rpc.so 00:04:37.342 CC lib/keyring/keyring.o 00:04:37.342 CC lib/keyring/keyring_rpc.o 00:04:37.342 CC lib/notify/notify.o 00:04:37.342 CC lib/notify/notify_rpc.o 00:04:37.342 CC lib/trace/trace.o 00:04:37.342 CC lib/trace/trace_flags.o 00:04:37.342 CC lib/trace/trace_rpc.o 00:04:37.601 LIB libspdk_notify.a 00:04:37.601 SO libspdk_notify.so.6.0 00:04:37.601 LIB libspdk_keyring.a 00:04:37.601 SO libspdk_keyring.so.2.0 00:04:37.601 LIB libspdk_trace.a 00:04:37.859 SYMLINK libspdk_notify.so 00:04:37.859 SYMLINK libspdk_keyring.so 00:04:37.859 SO libspdk_trace.so.11.0 00:04:37.859 SYMLINK libspdk_trace.so 00:04:38.426 CC lib/thread/iobuf.o 00:04:38.426 CC lib/thread/thread.o 00:04:38.426 CC lib/sock/sock.o 00:04:38.426 CC lib/sock/sock_rpc.o 00:04:38.685 LIB libspdk_sock.a 00:04:38.943 SO libspdk_sock.so.10.0 00:04:38.943 SYMLINK libspdk_sock.so 00:04:39.508 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:39.508 CC lib/nvme/nvme_ctrlr.o 00:04:39.508 CC lib/nvme/nvme_fabric.o 00:04:39.508 CC lib/nvme/nvme_ns.o 00:04:39.509 CC lib/nvme/nvme_ns_cmd.o 00:04:39.509 CC lib/nvme/nvme_pcie_common.o 00:04:39.509 CC lib/nvme/nvme_pcie.o 00:04:39.509 CC lib/nvme/nvme.o 00:04:39.509 CC lib/nvme/nvme_qpair.o 00:04:40.074 CC lib/nvme/nvme_quirks.o 00:04:40.074 LIB libspdk_thread.a 00:04:40.074 CC lib/nvme/nvme_transport.o 00:04:40.075 SO libspdk_thread.so.11.0 00:04:40.334 CC lib/nvme/nvme_discovery.o 00:04:40.334 SYMLINK libspdk_thread.so 00:04:40.334 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:40.334 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:40.334 CC lib/nvme/nvme_tcp.o 00:04:40.334 CC lib/nvme/nvme_opal.o 00:04:40.334 CC lib/nvme/nvme_io_msg.o 00:04:40.593 CC lib/nvme/nvme_poll_group.o 00:04:40.593 CC lib/nvme/nvme_zns.o 00:04:40.852 CC lib/nvme/nvme_stubs.o 00:04:40.852 CC lib/nvme/nvme_auth.o 00:04:40.852 CC lib/nvme/nvme_cuse.o 00:04:40.852 CC lib/nvme/nvme_rdma.o 00:04:41.111 CC lib/accel/accel.o 
00:04:41.371 CC lib/blob/blobstore.o 00:04:41.371 CC lib/accel/accel_rpc.o 00:04:41.371 CC lib/init/json_config.o 00:04:41.371 CC lib/init/subsystem.o 00:04:41.371 CC lib/init/subsystem_rpc.o 00:04:41.629 CC lib/init/rpc.o 00:04:41.629 CC lib/accel/accel_sw.o 00:04:41.629 LIB libspdk_init.a 00:04:41.629 SO libspdk_init.so.6.0 00:04:41.629 CC lib/virtio/virtio.o 00:04:41.890 SYMLINK libspdk_init.so 00:04:41.890 CC lib/virtio/virtio_vhost_user.o 00:04:41.890 CC lib/virtio/virtio_vfio_user.o 00:04:41.890 CC lib/blob/request.o 00:04:42.153 CC lib/fsdev/fsdev.o 00:04:42.153 CC lib/virtio/virtio_pci.o 00:04:42.153 CC lib/blob/zeroes.o 00:04:42.153 CC lib/blob/blob_bs_dev.o 00:04:42.153 CC lib/fsdev/fsdev_io.o 00:04:42.153 CC lib/event/app.o 00:04:42.417 CC lib/event/reactor.o 00:04:42.417 CC lib/event/log_rpc.o 00:04:42.417 CC lib/fsdev/fsdev_rpc.o 00:04:42.417 LIB libspdk_virtio.a 00:04:42.417 LIB libspdk_accel.a 00:04:42.417 SO libspdk_virtio.so.7.0 00:04:42.417 SO libspdk_accel.so.16.0 00:04:42.417 CC lib/event/app_rpc.o 00:04:42.675 CC lib/event/scheduler_static.o 00:04:42.675 SYMLINK libspdk_accel.so 00:04:42.675 SYMLINK libspdk_virtio.so 00:04:42.675 LIB libspdk_nvme.a 00:04:42.675 SO libspdk_nvme.so.15.0 00:04:42.675 CC lib/bdev/bdev.o 00:04:42.675 CC lib/bdev/bdev_rpc.o 00:04:42.675 CC lib/bdev/bdev_zone.o 00:04:42.675 CC lib/bdev/part.o 00:04:42.933 CC lib/bdev/scsi_nvme.o 00:04:42.933 LIB libspdk_fsdev.a 00:04:42.933 LIB libspdk_event.a 00:04:42.933 SO libspdk_event.so.14.0 00:04:42.933 SO libspdk_fsdev.so.2.0 00:04:42.933 SYMLINK libspdk_event.so 00:04:42.933 SYMLINK libspdk_fsdev.so 00:04:43.192 SYMLINK libspdk_nvme.so 00:04:43.192 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:44.132 LIB libspdk_fuse_dispatcher.a 00:04:44.132 SO libspdk_fuse_dispatcher.so.1.0 00:04:44.390 SYMLINK libspdk_fuse_dispatcher.so 00:04:45.770 LIB libspdk_blob.a 00:04:45.770 SO libspdk_blob.so.12.0 00:04:45.770 SYMLINK libspdk_blob.so 00:04:46.028 LIB libspdk_bdev.a 00:04:46.287 CC 
lib/lvol/lvol.o 00:04:46.287 SO libspdk_bdev.so.17.0 00:04:46.287 CC lib/blobfs/tree.o 00:04:46.287 CC lib/blobfs/blobfs.o 00:04:46.287 SYMLINK libspdk_bdev.so 00:04:46.547 CC lib/scsi/dev.o 00:04:46.547 CC lib/scsi/lun.o 00:04:46.547 CC lib/scsi/scsi.o 00:04:46.547 CC lib/scsi/port.o 00:04:46.547 CC lib/nbd/nbd.o 00:04:46.547 CC lib/nvmf/ctrlr.o 00:04:46.547 CC lib/ftl/ftl_core.o 00:04:46.547 CC lib/ublk/ublk.o 00:04:46.547 CC lib/ublk/ublk_rpc.o 00:04:46.807 CC lib/scsi/scsi_bdev.o 00:04:46.807 CC lib/nvmf/ctrlr_discovery.o 00:04:46.807 CC lib/nvmf/ctrlr_bdev.o 00:04:46.807 CC lib/nvmf/subsystem.o 00:04:47.067 CC lib/ftl/ftl_init.o 00:04:47.067 CC lib/nbd/nbd_rpc.o 00:04:47.067 LIB libspdk_blobfs.a 00:04:47.326 SO libspdk_blobfs.so.11.0 00:04:47.326 LIB libspdk_nbd.a 00:04:47.326 CC lib/ftl/ftl_layout.o 00:04:47.326 CC lib/scsi/scsi_pr.o 00:04:47.326 LIB libspdk_lvol.a 00:04:47.326 SO libspdk_nbd.so.7.0 00:04:47.326 SYMLINK libspdk_blobfs.so 00:04:47.326 SO libspdk_lvol.so.11.0 00:04:47.326 LIB libspdk_ublk.a 00:04:47.326 CC lib/nvmf/nvmf.o 00:04:47.326 SO libspdk_ublk.so.3.0 00:04:47.326 SYMLINK libspdk_nbd.so 00:04:47.326 CC lib/ftl/ftl_debug.o 00:04:47.326 SYMLINK libspdk_lvol.so 00:04:47.326 CC lib/scsi/scsi_rpc.o 00:04:47.326 CC lib/ftl/ftl_io.o 00:04:47.326 SYMLINK libspdk_ublk.so 00:04:47.326 CC lib/nvmf/nvmf_rpc.o 00:04:47.586 CC lib/scsi/task.o 00:04:47.586 CC lib/nvmf/transport.o 00:04:47.586 CC lib/nvmf/tcp.o 00:04:47.586 CC lib/nvmf/stubs.o 00:04:47.586 CC lib/nvmf/mdns_server.o 00:04:47.586 CC lib/ftl/ftl_sb.o 00:04:47.846 LIB libspdk_scsi.a 00:04:47.846 SO libspdk_scsi.so.9.0 00:04:47.846 CC lib/ftl/ftl_l2p.o 00:04:47.846 SYMLINK libspdk_scsi.so 00:04:47.846 CC lib/ftl/ftl_l2p_flat.o 00:04:48.105 CC lib/ftl/ftl_nv_cache.o 00:04:48.105 CC lib/nvmf/rdma.o 00:04:48.105 CC lib/ftl/ftl_band.o 00:04:48.105 CC lib/ftl/ftl_band_ops.o 00:04:48.365 CC lib/nvmf/auth.o 00:04:48.365 CC lib/ftl/ftl_writer.o 00:04:48.365 CC lib/ftl/ftl_rq.o 00:04:48.365 CC 
lib/ftl/ftl_reloc.o 00:04:48.624 CC lib/ftl/ftl_l2p_cache.o 00:04:48.624 CC lib/ftl/ftl_p2l.o 00:04:48.624 CC lib/ftl/ftl_p2l_log.o 00:04:48.624 CC lib/iscsi/conn.o 00:04:48.624 CC lib/vhost/vhost.o 00:04:48.883 CC lib/ftl/mngt/ftl_mngt.o 00:04:48.883 CC lib/iscsi/init_grp.o 00:04:48.883 CC lib/iscsi/iscsi.o 00:04:49.144 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:49.144 CC lib/iscsi/param.o 00:04:49.144 CC lib/vhost/vhost_rpc.o 00:04:49.144 CC lib/iscsi/portal_grp.o 00:04:49.144 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:49.403 CC lib/vhost/vhost_scsi.o 00:04:49.403 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:49.403 CC lib/iscsi/tgt_node.o 00:04:49.403 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:49.403 CC lib/vhost/vhost_blk.o 00:04:49.689 CC lib/vhost/rte_vhost_user.o 00:04:49.689 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:49.689 CC lib/iscsi/iscsi_subsystem.o 00:04:49.965 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:49.965 CC lib/iscsi/iscsi_rpc.o 00:04:49.966 CC lib/iscsi/task.o 00:04:49.966 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:50.224 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:50.224 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:50.224 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:50.224 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:50.482 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:50.482 CC lib/ftl/utils/ftl_conf.o 00:04:50.482 CC lib/ftl/utils/ftl_md.o 00:04:50.482 CC lib/ftl/utils/ftl_mempool.o 00:04:50.482 CC lib/ftl/utils/ftl_bitmap.o 00:04:50.482 CC lib/ftl/utils/ftl_property.o 00:04:50.482 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:50.740 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:50.741 LIB libspdk_iscsi.a 00:04:50.741 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:50.741 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:50.741 LIB libspdk_nvmf.a 00:04:50.741 SO libspdk_iscsi.so.8.0 00:04:50.741 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:50.999 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:50.999 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:50.999 LIB libspdk_vhost.a 00:04:50.999 SO libspdk_nvmf.so.20.0 
00:04:50.999 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:50.999 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:50.999 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:50.999 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:50.999 SO libspdk_vhost.so.8.0 00:04:50.999 SYMLINK libspdk_iscsi.so 00:04:51.000 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:51.000 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:51.000 CC lib/ftl/base/ftl_base_dev.o 00:04:51.315 CC lib/ftl/base/ftl_base_bdev.o 00:04:51.315 SYMLINK libspdk_vhost.so 00:04:51.315 CC lib/ftl/ftl_trace.o 00:04:51.315 SYMLINK libspdk_nvmf.so 00:04:51.572 LIB libspdk_ftl.a 00:04:51.830 SO libspdk_ftl.so.9.0 00:04:52.088 SYMLINK libspdk_ftl.so 00:04:52.347 CC module/env_dpdk/env_dpdk_rpc.o 00:04:52.606 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:52.606 CC module/scheduler/gscheduler/gscheduler.o 00:04:52.606 CC module/blob/bdev/blob_bdev.o 00:04:52.606 CC module/fsdev/aio/fsdev_aio.o 00:04:52.606 CC module/accel/error/accel_error.o 00:04:52.606 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:52.606 CC module/keyring/file/keyring.o 00:04:52.606 CC module/accel/ioat/accel_ioat.o 00:04:52.606 CC module/sock/posix/posix.o 00:04:52.606 LIB libspdk_env_dpdk_rpc.a 00:04:52.606 SO libspdk_env_dpdk_rpc.so.6.0 00:04:52.865 LIB libspdk_scheduler_gscheduler.a 00:04:52.865 LIB libspdk_scheduler_dpdk_governor.a 00:04:52.865 CC module/keyring/file/keyring_rpc.o 00:04:52.865 SO libspdk_scheduler_gscheduler.so.4.0 00:04:52.865 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:52.865 SYMLINK libspdk_env_dpdk_rpc.so 00:04:52.865 CC module/accel/ioat/accel_ioat_rpc.o 00:04:52.865 CC module/accel/error/accel_error_rpc.o 00:04:52.865 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:52.865 LIB libspdk_scheduler_dynamic.a 00:04:52.865 SYMLINK libspdk_scheduler_gscheduler.so 00:04:52.865 CC module/fsdev/aio/linux_aio_mgr.o 00:04:52.865 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:52.865 LIB libspdk_blob_bdev.a 00:04:52.865 SO libspdk_scheduler_dynamic.so.4.0 00:04:52.865 
LIB libspdk_keyring_file.a 00:04:52.865 SO libspdk_blob_bdev.so.12.0 00:04:52.865 SO libspdk_keyring_file.so.2.0 00:04:52.865 LIB libspdk_accel_ioat.a 00:04:52.865 SYMLINK libspdk_scheduler_dynamic.so 00:04:52.865 SYMLINK libspdk_blob_bdev.so 00:04:53.123 SO libspdk_accel_ioat.so.6.0 00:04:53.123 SYMLINK libspdk_keyring_file.so 00:04:53.123 LIB libspdk_accel_error.a 00:04:53.123 SO libspdk_accel_error.so.2.0 00:04:53.123 CC module/accel/dsa/accel_dsa.o 00:04:53.123 SYMLINK libspdk_accel_ioat.so 00:04:53.123 CC module/accel/dsa/accel_dsa_rpc.o 00:04:53.123 SYMLINK libspdk_accel_error.so 00:04:53.123 CC module/keyring/linux/keyring.o 00:04:53.123 CC module/accel/iaa/accel_iaa.o 00:04:53.123 CC module/accel/iaa/accel_iaa_rpc.o 00:04:53.383 CC module/blobfs/bdev/blobfs_bdev.o 00:04:53.383 CC module/bdev/delay/vbdev_delay.o 00:04:53.383 CC module/bdev/error/vbdev_error.o 00:04:53.383 CC module/bdev/gpt/gpt.o 00:04:53.383 CC module/keyring/linux/keyring_rpc.o 00:04:53.383 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:53.383 LIB libspdk_accel_iaa.a 00:04:53.383 LIB libspdk_accel_dsa.a 00:04:53.383 LIB libspdk_fsdev_aio.a 00:04:53.383 SO libspdk_accel_iaa.so.3.0 00:04:53.383 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:53.383 SO libspdk_accel_dsa.so.5.0 00:04:53.383 LIB libspdk_keyring_linux.a 00:04:53.383 SO libspdk_fsdev_aio.so.1.0 00:04:53.383 SO libspdk_keyring_linux.so.1.0 00:04:53.383 LIB libspdk_sock_posix.a 00:04:53.639 CC module/bdev/gpt/vbdev_gpt.o 00:04:53.639 SYMLINK libspdk_accel_iaa.so 00:04:53.639 SYMLINK libspdk_accel_dsa.so 00:04:53.639 CC module/bdev/error/vbdev_error_rpc.o 00:04:53.639 SYMLINK libspdk_fsdev_aio.so 00:04:53.639 SO libspdk_sock_posix.so.6.0 00:04:53.639 SYMLINK libspdk_keyring_linux.so 00:04:53.639 SYMLINK libspdk_sock_posix.so 00:04:53.639 LIB libspdk_blobfs_bdev.a 00:04:53.639 SO libspdk_blobfs_bdev.so.6.0 00:04:53.639 LIB libspdk_bdev_error.a 00:04:53.639 CC module/bdev/null/bdev_null.o 00:04:53.639 CC module/bdev/lvol/vbdev_lvol.o 
00:04:53.639 LIB libspdk_bdev_delay.a 00:04:53.898 SYMLINK libspdk_blobfs_bdev.so 00:04:53.898 CC module/bdev/malloc/bdev_malloc.o 00:04:53.898 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:53.898 SO libspdk_bdev_error.so.6.0 00:04:53.898 CC module/bdev/nvme/bdev_nvme.o 00:04:53.898 SO libspdk_bdev_delay.so.6.0 00:04:53.898 LIB libspdk_bdev_gpt.a 00:04:53.898 CC module/bdev/passthru/vbdev_passthru.o 00:04:53.898 CC module/bdev/raid/bdev_raid.o 00:04:53.898 SYMLINK libspdk_bdev_error.so 00:04:53.898 SO libspdk_bdev_gpt.so.6.0 00:04:53.898 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:53.898 SYMLINK libspdk_bdev_delay.so 00:04:53.898 CC module/bdev/nvme/nvme_rpc.o 00:04:53.898 SYMLINK libspdk_bdev_gpt.so 00:04:53.898 CC module/bdev/nvme/bdev_mdns_client.o 00:04:53.898 CC module/bdev/raid/bdev_raid_rpc.o 00:04:54.156 CC module/bdev/null/bdev_null_rpc.o 00:04:54.156 CC module/bdev/raid/bdev_raid_sb.o 00:04:54.156 CC module/bdev/raid/raid0.o 00:04:54.156 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:54.156 LIB libspdk_bdev_null.a 00:04:54.156 LIB libspdk_bdev_malloc.a 00:04:54.156 SO libspdk_bdev_null.so.6.0 00:04:54.156 SO libspdk_bdev_malloc.so.6.0 00:04:54.414 SYMLINK libspdk_bdev_null.so 00:04:54.414 SYMLINK libspdk_bdev_malloc.so 00:04:54.414 LIB libspdk_bdev_passthru.a 00:04:54.414 SO libspdk_bdev_passthru.so.6.0 00:04:54.414 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:54.414 CC module/bdev/split/vbdev_split.o 00:04:54.414 CC module/bdev/raid/raid1.o 00:04:54.414 SYMLINK libspdk_bdev_passthru.so 00:04:54.414 CC module/bdev/raid/concat.o 00:04:54.414 CC module/bdev/aio/bdev_aio.o 00:04:54.414 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:54.672 CC module/bdev/ftl/bdev_ftl.o 00:04:54.672 CC module/bdev/split/vbdev_split_rpc.o 00:04:54.672 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:54.672 CC module/bdev/aio/bdev_aio_rpc.o 00:04:54.672 CC module/bdev/raid/raid5f.o 00:04:54.929 LIB libspdk_bdev_lvol.a 00:04:54.929 LIB libspdk_bdev_split.a 00:04:54.929 
SO libspdk_bdev_lvol.so.6.0 00:04:54.929 SO libspdk_bdev_split.so.6.0 00:04:54.929 LIB libspdk_bdev_zone_block.a 00:04:54.929 LIB libspdk_bdev_aio.a 00:04:54.929 SYMLINK libspdk_bdev_lvol.so 00:04:54.929 SO libspdk_bdev_zone_block.so.6.0 00:04:54.929 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:54.929 SO libspdk_bdev_aio.so.6.0 00:04:54.929 SYMLINK libspdk_bdev_split.so 00:04:54.929 CC module/bdev/nvme/vbdev_opal.o 00:04:54.930 SYMLINK libspdk_bdev_zone_block.so 00:04:54.930 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:54.930 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:54.930 SYMLINK libspdk_bdev_aio.so 00:04:54.930 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:54.930 CC module/bdev/iscsi/bdev_iscsi.o 00:04:54.930 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:55.187 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:55.187 LIB libspdk_bdev_ftl.a 00:04:55.187 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:55.187 SO libspdk_bdev_ftl.so.6.0 00:04:55.187 SYMLINK libspdk_bdev_ftl.so 00:04:55.187 LIB libspdk_bdev_raid.a 00:04:55.445 SO libspdk_bdev_raid.so.6.0 00:04:55.445 LIB libspdk_bdev_iscsi.a 00:04:55.445 SO libspdk_bdev_iscsi.so.6.0 00:04:55.445 SYMLINK libspdk_bdev_raid.so 00:04:55.708 SYMLINK libspdk_bdev_iscsi.so 00:04:55.708 LIB libspdk_bdev_virtio.a 00:04:55.708 SO libspdk_bdev_virtio.so.6.0 00:04:55.708 SYMLINK libspdk_bdev_virtio.so 00:04:57.127 LIB libspdk_bdev_nvme.a 00:04:57.127 SO libspdk_bdev_nvme.so.7.1 00:04:57.385 SYMLINK libspdk_bdev_nvme.so 00:04:57.952 CC module/event/subsystems/scheduler/scheduler.o 00:04:57.952 CC module/event/subsystems/iobuf/iobuf.o 00:04:57.952 CC module/event/subsystems/keyring/keyring.o 00:04:57.952 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:57.952 CC module/event/subsystems/vmd/vmd.o 00:04:57.952 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:57.952 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:57.952 CC module/event/subsystems/sock/sock.o 00:04:57.952 CC module/event/subsystems/fsdev/fsdev.o 00:04:58.210 LIB 
libspdk_event_scheduler.a 00:04:58.210 LIB libspdk_event_vmd.a 00:04:58.210 LIB libspdk_event_vhost_blk.a 00:04:58.210 LIB libspdk_event_keyring.a 00:04:58.210 LIB libspdk_event_fsdev.a 00:04:58.210 LIB libspdk_event_sock.a 00:04:58.210 LIB libspdk_event_iobuf.a 00:04:58.210 SO libspdk_event_vhost_blk.so.3.0 00:04:58.210 SO libspdk_event_scheduler.so.4.0 00:04:58.211 SO libspdk_event_vmd.so.6.0 00:04:58.211 SO libspdk_event_keyring.so.1.0 00:04:58.211 SO libspdk_event_fsdev.so.1.0 00:04:58.211 SO libspdk_event_sock.so.5.0 00:04:58.211 SO libspdk_event_iobuf.so.3.0 00:04:58.211 SYMLINK libspdk_event_vhost_blk.so 00:04:58.211 SYMLINK libspdk_event_scheduler.so 00:04:58.211 SYMLINK libspdk_event_vmd.so 00:04:58.211 SYMLINK libspdk_event_sock.so 00:04:58.211 SYMLINK libspdk_event_keyring.so 00:04:58.211 SYMLINK libspdk_event_fsdev.so 00:04:58.211 SYMLINK libspdk_event_iobuf.so 00:04:58.782 CC module/event/subsystems/accel/accel.o 00:04:58.782 LIB libspdk_event_accel.a 00:04:58.782 SO libspdk_event_accel.so.6.0 00:04:59.041 SYMLINK libspdk_event_accel.so 00:04:59.298 CC module/event/subsystems/bdev/bdev.o 00:04:59.557 LIB libspdk_event_bdev.a 00:04:59.557 SO libspdk_event_bdev.so.6.0 00:04:59.557 SYMLINK libspdk_event_bdev.so 00:04:59.815 CC module/event/subsystems/nbd/nbd.o 00:04:59.816 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:59.816 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:59.816 CC module/event/subsystems/ublk/ublk.o 00:05:00.074 CC module/event/subsystems/scsi/scsi.o 00:05:00.074 LIB libspdk_event_nbd.a 00:05:00.074 LIB libspdk_event_ublk.a 00:05:00.074 SO libspdk_event_nbd.so.6.0 00:05:00.074 SO libspdk_event_ublk.so.3.0 00:05:00.074 LIB libspdk_event_scsi.a 00:05:00.074 LIB libspdk_event_nvmf.a 00:05:00.074 SYMLINK libspdk_event_nbd.so 00:05:00.074 SO libspdk_event_scsi.so.6.0 00:05:00.074 SYMLINK libspdk_event_ublk.so 00:05:00.074 SO libspdk_event_nvmf.so.6.0 00:05:00.333 SYMLINK libspdk_event_scsi.so 00:05:00.333 SYMLINK libspdk_event_nvmf.so 
00:05:00.593 CC module/event/subsystems/iscsi/iscsi.o 00:05:00.593 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:00.851 LIB libspdk_event_iscsi.a 00:05:00.851 SO libspdk_event_iscsi.so.6.0 00:05:00.851 LIB libspdk_event_vhost_scsi.a 00:05:00.851 SO libspdk_event_vhost_scsi.so.3.0 00:05:00.851 SYMLINK libspdk_event_iscsi.so 00:05:00.851 SYMLINK libspdk_event_vhost_scsi.so 00:05:01.111 SO libspdk.so.6.0 00:05:01.111 SYMLINK libspdk.so 00:05:01.370 TEST_HEADER include/spdk/accel.h 00:05:01.370 CXX app/trace/trace.o 00:05:01.370 TEST_HEADER include/spdk/accel_module.h 00:05:01.370 CC test/rpc_client/rpc_client_test.o 00:05:01.370 TEST_HEADER include/spdk/assert.h 00:05:01.370 TEST_HEADER include/spdk/barrier.h 00:05:01.370 TEST_HEADER include/spdk/base64.h 00:05:01.370 TEST_HEADER include/spdk/bdev.h 00:05:01.370 TEST_HEADER include/spdk/bdev_module.h 00:05:01.370 TEST_HEADER include/spdk/bdev_zone.h 00:05:01.370 TEST_HEADER include/spdk/bit_array.h 00:05:01.370 TEST_HEADER include/spdk/bit_pool.h 00:05:01.370 TEST_HEADER include/spdk/blob_bdev.h 00:05:01.370 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:01.370 TEST_HEADER include/spdk/blobfs.h 00:05:01.370 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:01.370 TEST_HEADER include/spdk/blob.h 00:05:01.370 TEST_HEADER include/spdk/conf.h 00:05:01.370 TEST_HEADER include/spdk/config.h 00:05:01.370 TEST_HEADER include/spdk/cpuset.h 00:05:01.370 TEST_HEADER include/spdk/crc16.h 00:05:01.370 TEST_HEADER include/spdk/crc32.h 00:05:01.370 TEST_HEADER include/spdk/crc64.h 00:05:01.370 TEST_HEADER include/spdk/dif.h 00:05:01.370 TEST_HEADER include/spdk/dma.h 00:05:01.370 TEST_HEADER include/spdk/endian.h 00:05:01.370 TEST_HEADER include/spdk/env_dpdk.h 00:05:01.370 TEST_HEADER include/spdk/env.h 00:05:01.370 TEST_HEADER include/spdk/event.h 00:05:01.370 TEST_HEADER include/spdk/fd_group.h 00:05:01.370 TEST_HEADER include/spdk/fd.h 00:05:01.370 TEST_HEADER include/spdk/file.h 00:05:01.370 TEST_HEADER 
include/spdk/fsdev.h 00:05:01.370 TEST_HEADER include/spdk/fsdev_module.h 00:05:01.370 TEST_HEADER include/spdk/ftl.h 00:05:01.370 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:01.370 CC examples/ioat/perf/perf.o 00:05:01.370 TEST_HEADER include/spdk/gpt_spec.h 00:05:01.370 TEST_HEADER include/spdk/hexlify.h 00:05:01.370 TEST_HEADER include/spdk/histogram_data.h 00:05:01.370 TEST_HEADER include/spdk/idxd.h 00:05:01.370 TEST_HEADER include/spdk/idxd_spec.h 00:05:01.370 CC examples/util/zipf/zipf.o 00:05:01.370 TEST_HEADER include/spdk/init.h 00:05:01.370 TEST_HEADER include/spdk/ioat.h 00:05:01.370 CC test/thread/poller_perf/poller_perf.o 00:05:01.370 TEST_HEADER include/spdk/ioat_spec.h 00:05:01.630 CC test/dma/test_dma/test_dma.o 00:05:01.630 TEST_HEADER include/spdk/iscsi_spec.h 00:05:01.630 TEST_HEADER include/spdk/json.h 00:05:01.630 TEST_HEADER include/spdk/jsonrpc.h 00:05:01.630 TEST_HEADER include/spdk/keyring.h 00:05:01.630 TEST_HEADER include/spdk/keyring_module.h 00:05:01.630 CC test/app/bdev_svc/bdev_svc.o 00:05:01.630 TEST_HEADER include/spdk/likely.h 00:05:01.630 TEST_HEADER include/spdk/log.h 00:05:01.630 TEST_HEADER include/spdk/lvol.h 00:05:01.630 TEST_HEADER include/spdk/md5.h 00:05:01.630 TEST_HEADER include/spdk/memory.h 00:05:01.630 TEST_HEADER include/spdk/mmio.h 00:05:01.630 TEST_HEADER include/spdk/nbd.h 00:05:01.630 TEST_HEADER include/spdk/net.h 00:05:01.630 TEST_HEADER include/spdk/notify.h 00:05:01.630 TEST_HEADER include/spdk/nvme.h 00:05:01.630 TEST_HEADER include/spdk/nvme_intel.h 00:05:01.630 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:01.630 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:01.630 TEST_HEADER include/spdk/nvme_spec.h 00:05:01.630 TEST_HEADER include/spdk/nvme_zns.h 00:05:01.630 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:01.630 CC test/env/mem_callbacks/mem_callbacks.o 00:05:01.630 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:01.630 TEST_HEADER include/spdk/nvmf.h 00:05:01.630 TEST_HEADER include/spdk/nvmf_spec.h 
00:05:01.630 TEST_HEADER include/spdk/nvmf_transport.h 00:05:01.630 TEST_HEADER include/spdk/opal.h 00:05:01.630 LINK rpc_client_test 00:05:01.630 TEST_HEADER include/spdk/opal_spec.h 00:05:01.630 TEST_HEADER include/spdk/pci_ids.h 00:05:01.630 TEST_HEADER include/spdk/pipe.h 00:05:01.630 TEST_HEADER include/spdk/queue.h 00:05:01.630 LINK interrupt_tgt 00:05:01.630 TEST_HEADER include/spdk/reduce.h 00:05:01.630 TEST_HEADER include/spdk/rpc.h 00:05:01.630 TEST_HEADER include/spdk/scheduler.h 00:05:01.630 TEST_HEADER include/spdk/scsi.h 00:05:01.630 TEST_HEADER include/spdk/scsi_spec.h 00:05:01.630 TEST_HEADER include/spdk/sock.h 00:05:01.630 TEST_HEADER include/spdk/stdinc.h 00:05:01.630 TEST_HEADER include/spdk/string.h 00:05:01.630 TEST_HEADER include/spdk/thread.h 00:05:01.630 TEST_HEADER include/spdk/trace.h 00:05:01.630 TEST_HEADER include/spdk/trace_parser.h 00:05:01.630 TEST_HEADER include/spdk/tree.h 00:05:01.630 TEST_HEADER include/spdk/ublk.h 00:05:01.630 TEST_HEADER include/spdk/util.h 00:05:01.630 TEST_HEADER include/spdk/uuid.h 00:05:01.630 TEST_HEADER include/spdk/version.h 00:05:01.630 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:01.630 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:01.630 TEST_HEADER include/spdk/vhost.h 00:05:01.630 TEST_HEADER include/spdk/vmd.h 00:05:01.630 TEST_HEADER include/spdk/xor.h 00:05:01.630 TEST_HEADER include/spdk/zipf.h 00:05:01.630 LINK zipf 00:05:01.630 CXX test/cpp_headers/accel.o 00:05:01.630 LINK poller_perf 00:05:01.630 LINK ioat_perf 00:05:01.630 LINK bdev_svc 00:05:01.890 LINK spdk_trace 00:05:01.890 CC test/env/vtophys/vtophys.o 00:05:01.890 CXX test/cpp_headers/accel_module.o 00:05:01.890 CC app/trace_record/trace_record.o 00:05:01.890 CC app/nvmf_tgt/nvmf_main.o 00:05:01.890 CC examples/ioat/verify/verify.o 00:05:01.890 CC app/iscsi_tgt/iscsi_tgt.o 00:05:01.890 LINK vtophys 00:05:02.149 CXX test/cpp_headers/assert.o 00:05:02.149 LINK test_dma 00:05:02.149 CC test/app/histogram_perf/histogram_perf.o 
00:05:02.149 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:02.149 LINK mem_callbacks 00:05:02.149 LINK nvmf_tgt 00:05:02.149 CXX test/cpp_headers/barrier.o 00:05:02.149 LINK iscsi_tgt 00:05:02.149 LINK verify 00:05:02.149 LINK spdk_trace_record 00:05:02.149 LINK histogram_perf 00:05:02.408 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:02.408 CXX test/cpp_headers/base64.o 00:05:02.408 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:02.408 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:02.408 CXX test/cpp_headers/bdev.o 00:05:02.408 CXX test/cpp_headers/bdev_module.o 00:05:02.408 CXX test/cpp_headers/bdev_zone.o 00:05:02.408 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:02.408 LINK env_dpdk_post_init 00:05:02.675 CC app/spdk_tgt/spdk_tgt.o 00:05:02.675 LINK nvme_fuzz 00:05:02.675 CXX test/cpp_headers/bit_array.o 00:05:02.675 CC examples/thread/thread/thread_ex.o 00:05:02.675 CC examples/sock/hello_world/hello_sock.o 00:05:02.675 CC examples/vmd/lsvmd/lsvmd.o 00:05:02.675 CC examples/idxd/perf/perf.o 00:05:02.675 CXX test/cpp_headers/bit_pool.o 00:05:02.675 LINK spdk_tgt 00:05:02.675 CXX test/cpp_headers/blob_bdev.o 00:05:02.675 CC test/env/memory/memory_ut.o 00:05:02.942 LINK lsvmd 00:05:02.942 LINK thread 00:05:02.942 LINK hello_sock 00:05:02.942 LINK vhost_fuzz 00:05:02.942 CC examples/vmd/led/led.o 00:05:02.942 CXX test/cpp_headers/blobfs_bdev.o 00:05:02.942 CXX test/cpp_headers/blobfs.o 00:05:02.942 CXX test/cpp_headers/blob.o 00:05:02.942 CXX test/cpp_headers/conf.o 00:05:02.942 CC app/spdk_lspci/spdk_lspci.o 00:05:03.201 LINK idxd_perf 00:05:03.201 LINK led 00:05:03.201 CC app/spdk_nvme_perf/perf.o 00:05:03.201 LINK spdk_lspci 00:05:03.201 CXX test/cpp_headers/config.o 00:05:03.201 CXX test/cpp_headers/cpuset.o 00:05:03.201 CC app/spdk_nvme_identify/identify.o 00:05:03.201 CC app/spdk_nvme_discover/discovery_aer.o 00:05:03.201 CC app/spdk_top/spdk_top.o 00:05:03.461 CXX test/cpp_headers/crc16.o 00:05:03.461 CXX test/cpp_headers/crc32.o 
00:05:03.461 CC app/vhost/vhost.o 00:05:03.461 LINK spdk_nvme_discover 00:05:03.461 CC examples/nvme/hello_world/hello_world.o 00:05:03.721 CXX test/cpp_headers/crc64.o 00:05:03.721 CC test/env/pci/pci_ut.o 00:05:03.721 LINK vhost 00:05:03.721 LINK hello_world 00:05:03.721 CC app/spdk_dd/spdk_dd.o 00:05:03.721 CXX test/cpp_headers/dif.o 00:05:04.016 CXX test/cpp_headers/dma.o 00:05:04.016 LINK memory_ut 00:05:04.016 CC examples/nvme/reconnect/reconnect.o 00:05:04.016 CXX test/cpp_headers/endian.o 00:05:04.016 LINK pci_ut 00:05:04.016 LINK spdk_nvme_perf 00:05:04.275 CC app/fio/nvme/fio_plugin.o 00:05:04.275 LINK spdk_dd 00:05:04.275 LINK spdk_nvme_identify 00:05:04.275 CXX test/cpp_headers/env_dpdk.o 00:05:04.275 LINK iscsi_fuzz 00:05:04.275 LINK spdk_top 00:05:04.275 CC app/fio/bdev/fio_plugin.o 00:05:04.275 CC test/app/jsoncat/jsoncat.o 00:05:04.275 CXX test/cpp_headers/env.o 00:05:04.539 CXX test/cpp_headers/event.o 00:05:04.539 CC test/app/stub/stub.o 00:05:04.539 CXX test/cpp_headers/fd_group.o 00:05:04.539 LINK reconnect 00:05:04.539 CXX test/cpp_headers/fd.o 00:05:04.539 LINK jsoncat 00:05:04.539 CC test/event/event_perf/event_perf.o 00:05:04.539 CC test/event/reactor/reactor.o 00:05:04.539 LINK stub 00:05:04.804 CXX test/cpp_headers/file.o 00:05:04.804 CXX test/cpp_headers/fsdev.o 00:05:04.804 CC test/event/reactor_perf/reactor_perf.o 00:05:04.804 CC test/event/app_repeat/app_repeat.o 00:05:04.804 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:04.804 LINK event_perf 00:05:04.804 LINK spdk_nvme 00:05:04.804 LINK reactor 00:05:04.804 CXX test/cpp_headers/fsdev_module.o 00:05:04.804 CXX test/cpp_headers/ftl.o 00:05:04.804 LINK reactor_perf 00:05:04.805 LINK spdk_bdev 00:05:04.805 LINK app_repeat 00:05:05.062 CC test/nvme/aer/aer.o 00:05:05.062 CXX test/cpp_headers/fuse_dispatcher.o 00:05:05.062 CC test/nvme/reset/reset.o 00:05:05.062 CC test/accel/dif/dif.o 00:05:05.062 CC test/nvme/sgl/sgl.o 00:05:05.062 CC examples/nvme/arbitration/arbitration.o 
00:05:05.062 CC test/event/scheduler/scheduler.o 00:05:05.062 CC test/nvme/e2edp/nvme_dp.o 00:05:05.062 CXX test/cpp_headers/gpt_spec.o 00:05:05.321 CC test/blobfs/mkfs/mkfs.o 00:05:05.321 LINK nvme_manage 00:05:05.321 LINK aer 00:05:05.321 LINK reset 00:05:05.321 CXX test/cpp_headers/hexlify.o 00:05:05.321 LINK scheduler 00:05:05.321 LINK sgl 00:05:05.321 LINK nvme_dp 00:05:05.580 LINK mkfs 00:05:05.580 LINK arbitration 00:05:05.580 CXX test/cpp_headers/histogram_data.o 00:05:05.580 CC examples/nvme/hotplug/hotplug.o 00:05:05.580 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:05.580 CXX test/cpp_headers/idxd.o 00:05:05.580 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:05.839 CC test/nvme/overhead/overhead.o 00:05:05.839 CC test/nvme/err_injection/err_injection.o 00:05:05.839 CC examples/accel/perf/accel_perf.o 00:05:05.839 CC test/lvol/esnap/esnap.o 00:05:05.839 LINK cmb_copy 00:05:05.839 LINK hotplug 00:05:05.839 CXX test/cpp_headers/idxd_spec.o 00:05:05.839 CC examples/blob/hello_world/hello_blob.o 00:05:05.839 LINK dif 00:05:05.839 LINK hello_fsdev 00:05:06.099 LINK err_injection 00:05:06.099 CXX test/cpp_headers/init.o 00:05:06.099 LINK overhead 00:05:06.099 CC examples/nvme/abort/abort.o 00:05:06.099 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:06.099 LINK hello_blob 00:05:06.099 CXX test/cpp_headers/ioat.o 00:05:06.357 CC test/nvme/startup/startup.o 00:05:06.357 CC test/nvme/reserve/reserve.o 00:05:06.357 LINK pmr_persistence 00:05:06.357 CXX test/cpp_headers/ioat_spec.o 00:05:06.357 CC test/bdev/bdevio/bdevio.o 00:05:06.357 LINK accel_perf 00:05:06.357 CXX test/cpp_headers/iscsi_spec.o 00:05:06.357 CC examples/blob/cli/blobcli.o 00:05:06.357 LINK startup 00:05:06.617 LINK reserve 00:05:06.617 CXX test/cpp_headers/json.o 00:05:06.617 LINK abort 00:05:06.617 CC test/nvme/simple_copy/simple_copy.o 00:05:06.617 CC test/nvme/connect_stress/connect_stress.o 00:05:06.617 CXX test/cpp_headers/jsonrpc.o 00:05:06.875 CC 
test/nvme/boot_partition/boot_partition.o 00:05:06.875 CC test/nvme/compliance/nvme_compliance.o 00:05:06.875 LINK bdevio 00:05:06.875 LINK simple_copy 00:05:06.875 CC test/nvme/fused_ordering/fused_ordering.o 00:05:06.875 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:06.875 LINK connect_stress 00:05:06.875 CXX test/cpp_headers/keyring.o 00:05:06.875 LINK boot_partition 00:05:07.134 LINK blobcli 00:05:07.134 CXX test/cpp_headers/keyring_module.o 00:05:07.134 LINK fused_ordering 00:05:07.134 CXX test/cpp_headers/likely.o 00:05:07.134 LINK doorbell_aers 00:05:07.134 CC test/nvme/fdp/fdp.o 00:05:07.134 CC test/nvme/cuse/cuse.o 00:05:07.134 LINK nvme_compliance 00:05:07.134 CC examples/bdev/hello_world/hello_bdev.o 00:05:07.134 CXX test/cpp_headers/log.o 00:05:07.394 CXX test/cpp_headers/lvol.o 00:05:07.394 CXX test/cpp_headers/md5.o 00:05:07.394 CXX test/cpp_headers/memory.o 00:05:07.394 CXX test/cpp_headers/mmio.o 00:05:07.394 CC examples/bdev/bdevperf/bdevperf.o 00:05:07.394 CXX test/cpp_headers/nbd.o 00:05:07.394 CXX test/cpp_headers/net.o 00:05:07.394 CXX test/cpp_headers/notify.o 00:05:07.394 CXX test/cpp_headers/nvme.o 00:05:07.394 LINK hello_bdev 00:05:07.653 CXX test/cpp_headers/nvme_intel.o 00:05:07.653 CXX test/cpp_headers/nvme_ocssd.o 00:05:07.653 LINK fdp 00:05:07.653 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:07.653 CXX test/cpp_headers/nvme_spec.o 00:05:07.653 CXX test/cpp_headers/nvme_zns.o 00:05:07.653 CXX test/cpp_headers/nvmf_cmd.o 00:05:07.653 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:07.653 CXX test/cpp_headers/nvmf.o 00:05:07.653 CXX test/cpp_headers/nvmf_spec.o 00:05:07.653 CXX test/cpp_headers/nvmf_transport.o 00:05:07.911 CXX test/cpp_headers/opal.o 00:05:07.911 CXX test/cpp_headers/opal_spec.o 00:05:07.911 CXX test/cpp_headers/pci_ids.o 00:05:07.911 CXX test/cpp_headers/pipe.o 00:05:07.911 CXX test/cpp_headers/queue.o 00:05:07.911 CXX test/cpp_headers/reduce.o 00:05:07.911 CXX test/cpp_headers/rpc.o 00:05:07.911 CXX 
test/cpp_headers/scheduler.o 00:05:07.911 CXX test/cpp_headers/scsi.o 00:05:07.911 CXX test/cpp_headers/scsi_spec.o 00:05:07.911 CXX test/cpp_headers/sock.o 00:05:07.911 CXX test/cpp_headers/stdinc.o 00:05:08.170 CXX test/cpp_headers/string.o 00:05:08.170 CXX test/cpp_headers/thread.o 00:05:08.170 CXX test/cpp_headers/trace.o 00:05:08.170 CXX test/cpp_headers/trace_parser.o 00:05:08.170 CXX test/cpp_headers/tree.o 00:05:08.170 CXX test/cpp_headers/ublk.o 00:05:08.170 CXX test/cpp_headers/util.o 00:05:08.170 CXX test/cpp_headers/uuid.o 00:05:08.170 CXX test/cpp_headers/version.o 00:05:08.170 CXX test/cpp_headers/vfio_user_pci.o 00:05:08.170 CXX test/cpp_headers/vfio_user_spec.o 00:05:08.429 CXX test/cpp_headers/vhost.o 00:05:08.429 CXX test/cpp_headers/vmd.o 00:05:08.429 LINK bdevperf 00:05:08.429 CXX test/cpp_headers/xor.o 00:05:08.429 CXX test/cpp_headers/zipf.o 00:05:08.687 LINK cuse 00:05:08.946 CC examples/nvmf/nvmf/nvmf.o 00:05:09.514 LINK nvmf 00:05:12.054 LINK esnap 00:05:12.631 00:05:12.631 real 1m33.743s 00:05:12.631 user 8m3.538s 00:05:12.631 sys 1m46.083s 00:05:12.631 04:22:08 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:12.631 04:22:08 make -- common/autotest_common.sh@10 -- $ set +x 00:05:12.631 ************************************ 00:05:12.631 END TEST make 00:05:12.631 ************************************ 00:05:12.631 04:22:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:12.631 04:22:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:12.632 04:22:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:12.632 04:22:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.632 04:22:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:12.632 04:22:08 -- pm/common@44 -- $ pid=5473 00:05:12.632 04:22:08 -- pm/common@50 -- $ kill -TERM 5473 00:05:12.632 04:22:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.632 
04:22:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:12.632 04:22:08 -- pm/common@44 -- $ pid=5475 00:05:12.632 04:22:08 -- pm/common@50 -- $ kill -TERM 5475 00:05:12.632 04:22:08 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:12.632 04:22:08 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:12.632 04:22:09 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.632 04:22:09 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.632 04:22:09 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.632 04:22:09 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.632 04:22:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.632 04:22:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.632 04:22:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.632 04:22:09 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.632 04:22:09 -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.632 04:22:09 -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.632 04:22:09 -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.632 04:22:09 -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.632 04:22:09 -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.632 04:22:09 -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.632 04:22:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.632 04:22:09 -- scripts/common.sh@344 -- # case "$op" in 00:05:12.632 04:22:09 -- scripts/common.sh@345 -- # : 1 00:05:12.632 04:22:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.632 04:22:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.632 04:22:09 -- scripts/common.sh@365 -- # decimal 1 00:05:12.632 04:22:09 -- scripts/common.sh@353 -- # local d=1 00:05:12.632 04:22:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.632 04:22:09 -- scripts/common.sh@355 -- # echo 1 00:05:12.632 04:22:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.632 04:22:09 -- scripts/common.sh@366 -- # decimal 2 00:05:12.632 04:22:09 -- scripts/common.sh@353 -- # local d=2 00:05:12.632 04:22:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.632 04:22:09 -- scripts/common.sh@355 -- # echo 2 00:05:12.632 04:22:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.632 04:22:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.632 04:22:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.632 04:22:09 -- scripts/common.sh@368 -- # return 0 00:05:12.632 04:22:09 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.632 04:22:09 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.632 --rc genhtml_branch_coverage=1 00:05:12.632 --rc genhtml_function_coverage=1 00:05:12.632 --rc genhtml_legend=1 00:05:12.632 --rc geninfo_all_blocks=1 00:05:12.632 --rc geninfo_unexecuted_blocks=1 00:05:12.632 00:05:12.632 ' 00:05:12.632 04:22:09 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.632 --rc genhtml_branch_coverage=1 00:05:12.632 --rc genhtml_function_coverage=1 00:05:12.632 --rc genhtml_legend=1 00:05:12.632 --rc geninfo_all_blocks=1 00:05:12.632 --rc geninfo_unexecuted_blocks=1 00:05:12.632 00:05:12.632 ' 00:05:12.632 04:22:09 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.632 --rc genhtml_branch_coverage=1 00:05:12.632 --rc 
genhtml_function_coverage=1 00:05:12.632 --rc genhtml_legend=1 00:05:12.632 --rc geninfo_all_blocks=1 00:05:12.632 --rc geninfo_unexecuted_blocks=1 00:05:12.632 00:05:12.632 ' 00:05:12.632 04:22:09 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.632 --rc genhtml_branch_coverage=1 00:05:12.632 --rc genhtml_function_coverage=1 00:05:12.632 --rc genhtml_legend=1 00:05:12.632 --rc geninfo_all_blocks=1 00:05:12.632 --rc geninfo_unexecuted_blocks=1 00:05:12.632 00:05:12.632 ' 00:05:12.632 04:22:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.632 04:22:09 -- nvmf/common.sh@7 -- # uname -s 00:05:12.632 04:22:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.632 04:22:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.632 04:22:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.632 04:22:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.632 04:22:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.632 04:22:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.632 04:22:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.632 04:22:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.632 04:22:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.632 04:22:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.891 04:22:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d5e0a0d0-cc2e-4042-b160-ee4b4435e5e2 00:05:12.891 04:22:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=d5e0a0d0-cc2e-4042-b160-ee4b4435e5e2 00:05:12.891 04:22:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.892 04:22:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.892 04:22:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.892 04:22:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:12.892 04:22:09 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.892 04:22:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.892 04:22:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.892 04:22:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.892 04:22:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.892 04:22:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.892 04:22:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.892 04:22:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.892 04:22:09 -- paths/export.sh@5 -- # export PATH 00:05:12.892 04:22:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.892 04:22:09 -- nvmf/common.sh@51 -- # : 0 00:05:12.892 04:22:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.892 04:22:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.892 04:22:09 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:12.892 04:22:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.892 04:22:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.892 04:22:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.892 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.892 04:22:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.892 04:22:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.892 04:22:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.892 04:22:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:12.892 04:22:09 -- spdk/autotest.sh@32 -- # uname -s 00:05:12.892 04:22:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:12.892 04:22:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:12.892 04:22:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.892 04:22:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:12.892 04:22:09 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.892 04:22:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:12.892 04:22:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:12.892 04:22:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:12.892 04:22:09 -- spdk/autotest.sh@48 -- # udevadm_pid=54534 00:05:12.892 04:22:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:12.892 04:22:09 -- pm/common@17 -- # local monitor 00:05:12.892 04:22:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.892 04:22:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.892 04:22:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:12.892 04:22:09 -- pm/common@25 -- # sleep 1 00:05:12.892 04:22:09 -- pm/common@21 -- # date +%s 00:05:12.892 04:22:09 -- 
pm/common@21 -- # date +%s 00:05:12.892 04:22:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732681329 00:05:12.892 04:22:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732681329 00:05:12.892 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732681329_collect-cpu-load.pm.log 00:05:12.892 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732681329_collect-vmstat.pm.log 00:05:13.832 04:22:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:13.832 04:22:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:13.832 04:22:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.832 04:22:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.832 04:22:10 -- spdk/autotest.sh@59 -- # create_test_list 00:05:13.832 04:22:10 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:13.832 04:22:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.832 04:22:10 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:13.832 04:22:10 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:13.832 04:22:10 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:13.832 04:22:10 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:13.832 04:22:10 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:13.832 04:22:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:13.832 04:22:10 -- common/autotest_common.sh@1457 -- # uname 00:05:13.832 04:22:10 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:13.832 04:22:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:13.832 04:22:10 -- common/autotest_common.sh@1477 -- 
# uname 00:05:13.832 04:22:10 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:13.832 04:22:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:13.832 04:22:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:14.092 lcov: LCOV version 1.15 00:05:14.092 04:22:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:28.983 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:28.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:43.897 04:22:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:43.897 04:22:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.897 04:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:43.897 04:22:37 -- spdk/autotest.sh@78 -- # rm -f 00:05:43.897 04:22:37 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.897 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:43.897 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:43.897 04:22:38 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:43.897 04:22:38 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:43.897 04:22:38 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:43.897 04:22:38 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:43.897 
04:22:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:43.897 04:22:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:43.897 04:22:38 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:43.897 04:22:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:43.897 04:22:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:43.897 04:22:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:43.897 04:22:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:43.897 04:22:38 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:43.897 04:22:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:43.897 04:22:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:43.897 04:22:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:43.897 04:22:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:43.897 04:22:38 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:43.897 04:22:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:43.897 04:22:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:43.897 04:22:38 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:43.897 04:22:38 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:43.897 04:22:38 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:43.897 04:22:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:43.897 04:22:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:43.897 04:22:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:43.897 04:22:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.897 04:22:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.897 04:22:38 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:43.897 04:22:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:43.897 04:22:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:43.897 No valid GPT data, bailing 00:05:43.897 04:22:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:43.897 04:22:38 -- scripts/common.sh@394 -- # pt= 00:05:43.897 04:22:38 -- scripts/common.sh@395 -- # return 1 00:05:43.897 04:22:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:43.897 1+0 records in 00:05:43.897 1+0 records out 00:05:43.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414526 s, 253 MB/s 00:05:43.897 04:22:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.897 04:22:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.897 04:22:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:43.897 04:22:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:43.897 04:22:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:43.897 No valid GPT data, bailing 00:05:43.897 04:22:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:43.897 04:22:39 -- scripts/common.sh@394 -- # pt= 00:05:43.897 04:22:39 -- scripts/common.sh@395 -- # return 1 00:05:43.897 04:22:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:43.897 1+0 records in 00:05:43.897 1+0 records out 00:05:43.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00575264 s, 182 MB/s 00:05:43.897 04:22:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.897 04:22:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.898 04:22:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:43.898 04:22:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:43.898 04:22:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:43.898 No valid GPT data, bailing 00:05:43.898 04:22:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:43.898 04:22:39 -- scripts/common.sh@394 -- # pt= 00:05:43.898 04:22:39 -- scripts/common.sh@395 -- # return 1 00:05:43.898 04:22:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:43.898 1+0 records in 00:05:43.898 1+0 records out 00:05:43.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402033 s, 261 MB/s 00:05:43.898 04:22:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.898 04:22:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.898 04:22:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:43.898 04:22:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:43.898 04:22:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:43.898 No valid GPT data, bailing 00:05:43.898 04:22:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:43.898 04:22:39 -- scripts/common.sh@394 -- # pt= 00:05:43.898 04:22:39 -- scripts/common.sh@395 -- # return 1 00:05:43.898 04:22:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:43.898 1+0 records in 00:05:43.898 1+0 records out 00:05:43.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605695 s, 173 MB/s 00:05:43.898 04:22:39 -- spdk/autotest.sh@105 -- # sync 00:05:43.898 04:22:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:43.898 04:22:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:43.898 04:22:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:45.803 04:22:42 -- spdk/autotest.sh@111 -- # uname -s 00:05:45.803 04:22:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:45.803 04:22:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:45.803 04:22:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:46.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.371 Hugepages 00:05:46.371 node hugesize free / total 00:05:46.371 node0 1048576kB 0 / 0 00:05:46.371 node0 2048kB 0 / 0 00:05:46.371 00:05:46.371 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:46.371 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:46.630 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:46.630 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:46.630 04:22:43 -- spdk/autotest.sh@117 -- # uname -s 00:05:46.630 04:22:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:46.630 04:22:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:46.630 04:22:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.567 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.567 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.826 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.826 04:22:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:48.763 04:22:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:48.763 04:22:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:48.763 04:22:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:48.763 04:22:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:48.763 04:22:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:48.764 04:22:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:48.764 04:22:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.764 04:22:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:48.764 04:22:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:48.764 04:22:45 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:48.764 04:22:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:48.764 04:22:45 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.330 Waiting for block devices as requested 00:05:49.330 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.589 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.589 04:22:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:49.589 04:22:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:49.589 04:22:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:49.589 04:22:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:49.589 04:22:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.589 04:22:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:49.589 04:22:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.589 04:22:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:49.589 04:22:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:49.589 04:22:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:49.589 04:22:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:49.589 04:22:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:49.589 04:22:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:49.589 04:22:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:49.589 04:22:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:49.589 04:22:46 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:05:49.589 04:22:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:49.589 04:22:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:49.589 04:22:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:49.589 04:22:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:49.589 04:22:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:49.589 04:22:46 -- common/autotest_common.sh@1543 -- # continue 00:05:49.589 04:22:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:49.589 04:22:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:49.589 04:22:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:49.589 04:22:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:49.589 04:22:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:49.589 04:22:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:49.589 04:22:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:49.589 04:22:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:49.589 04:22:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:49.589 04:22:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:49.589 04:22:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:49.589 04:22:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:49.589 04:22:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:49.589 04:22:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:49.589 04:22:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:49.589 04:22:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:49.589 04:22:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:05:49.589 04:22:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:49.589 04:22:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:49.589 04:22:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:49.589 04:22:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:49.589 04:22:46 -- common/autotest_common.sh@1543 -- # continue 00:05:49.589 04:22:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:49.589 04:22:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.589 04:22:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.848 04:22:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:49.848 04:22:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.848 04:22:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.848 04:22:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:50.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.785 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.785 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.785 04:22:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:50.785 04:22:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.785 04:22:47 -- common/autotest_common.sh@10 -- # set +x 00:05:50.785 04:22:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:50.785 04:22:47 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:50.785 04:22:47 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:50.785 04:22:47 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:50.785 04:22:47 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:50.785 04:22:47 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:50.785 04:22:47 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:50.785 04:22:47 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:50.785 
04:22:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:50.785 04:22:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:50.785 04:22:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:50.785 04:22:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:50.785 04:22:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:51.044 04:22:47 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:51.044 04:22:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:51.044 04:22:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:51.044 04:22:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:51.044 04:22:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:51.044 04:22:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:51.044 04:22:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:51.044 04:22:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:51.044 04:22:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:51.044 04:22:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:51.044 04:22:47 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:51.044 04:22:47 -- common/autotest_common.sh@1572 -- # return 0 00:05:51.044 04:22:47 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:51.044 04:22:47 -- common/autotest_common.sh@1580 -- # return 0 00:05:51.044 04:22:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:51.044 04:22:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:51.044 04:22:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:51.044 04:22:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:51.044 04:22:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:51.044 04:22:47 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.044 04:22:47 -- common/autotest_common.sh@10 -- # set +x 00:05:51.044 04:22:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:51.044 04:22:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:51.044 04:22:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.044 04:22:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.044 04:22:47 -- common/autotest_common.sh@10 -- # set +x 00:05:51.044 ************************************ 00:05:51.044 START TEST env 00:05:51.044 ************************************ 00:05:51.044 04:22:47 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:51.044 * Looking for test storage... 00:05:51.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:51.044 04:22:47 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.044 04:22:47 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.044 04:22:47 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.303 04:22:47 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.303 04:22:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.303 04:22:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.303 04:22:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.303 04:22:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.303 04:22:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.303 04:22:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.303 04:22:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.303 04:22:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.303 04:22:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.303 04:22:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.303 04:22:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.303 04:22:47 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:51.303 04:22:47 env -- scripts/common.sh@345 -- # : 1 00:05:51.303 04:22:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.303 04:22:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.303 04:22:47 env -- scripts/common.sh@365 -- # decimal 1 00:05:51.303 04:22:47 env -- scripts/common.sh@353 -- # local d=1 00:05:51.303 04:22:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.303 04:22:47 env -- scripts/common.sh@355 -- # echo 1 00:05:51.303 04:22:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.303 04:22:47 env -- scripts/common.sh@366 -- # decimal 2 00:05:51.303 04:22:47 env -- scripts/common.sh@353 -- # local d=2 00:05:51.303 04:22:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.303 04:22:47 env -- scripts/common.sh@355 -- # echo 2 00:05:51.303 04:22:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.303 04:22:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.303 04:22:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.303 04:22:47 env -- scripts/common.sh@368 -- # return 0 00:05:51.303 04:22:47 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.303 04:22:47 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.303 --rc genhtml_branch_coverage=1 00:05:51.303 --rc genhtml_function_coverage=1 00:05:51.303 --rc genhtml_legend=1 00:05:51.303 --rc geninfo_all_blocks=1 00:05:51.303 --rc geninfo_unexecuted_blocks=1 00:05:51.303 00:05:51.303 ' 00:05:51.303 04:22:47 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.303 --rc genhtml_branch_coverage=1 00:05:51.303 --rc genhtml_function_coverage=1 00:05:51.303 --rc genhtml_legend=1 00:05:51.303 --rc 
geninfo_all_blocks=1 00:05:51.303 --rc geninfo_unexecuted_blocks=1 00:05:51.303 00:05:51.303 ' 00:05:51.303 04:22:47 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.303 --rc genhtml_branch_coverage=1 00:05:51.303 --rc genhtml_function_coverage=1 00:05:51.303 --rc genhtml_legend=1 00:05:51.303 --rc geninfo_all_blocks=1 00:05:51.303 --rc geninfo_unexecuted_blocks=1 00:05:51.303 00:05:51.303 ' 00:05:51.303 04:22:47 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.303 --rc genhtml_branch_coverage=1 00:05:51.303 --rc genhtml_function_coverage=1 00:05:51.303 --rc genhtml_legend=1 00:05:51.303 --rc geninfo_all_blocks=1 00:05:51.304 --rc geninfo_unexecuted_blocks=1 00:05:51.304 00:05:51.304 ' 00:05:51.304 04:22:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:51.304 04:22:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.304 04:22:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.304 04:22:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.304 ************************************ 00:05:51.304 START TEST env_memory 00:05:51.304 ************************************ 00:05:51.304 04:22:47 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:51.304 00:05:51.304 00:05:51.304 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.304 http://cunit.sourceforge.net/ 00:05:51.304 00:05:51.304 00:05:51.304 Suite: memory 00:05:51.304 Test: alloc and free memory map ...[2024-11-27 04:22:47.763446] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:51.304 passed 00:05:51.304 Test: mem map translation ...[2024-11-27 04:22:47.810121] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:51.304 [2024-11-27 04:22:47.810179] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:51.304 [2024-11-27 04:22:47.810251] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:51.304 [2024-11-27 04:22:47.810288] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:51.304 passed 00:05:51.304 Test: mem map registration ...[2024-11-27 04:22:47.879028] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:51.304 [2024-11-27 04:22:47.879073] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:51.563 passed 00:05:51.563 Test: mem map adjacent registrations ...passed 00:05:51.563 00:05:51.563 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.563 suites 1 1 n/a 0 0 00:05:51.563 tests 4 4 4 0 0 00:05:51.563 asserts 152 152 152 0 n/a 00:05:51.563 00:05:51.563 Elapsed time = 0.250 seconds 00:05:51.563 00:05:51.563 real 0m0.306s 00:05:51.563 user 0m0.258s 00:05:51.563 sys 0m0.036s 00:05:51.563 04:22:47 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.563 04:22:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:51.563 ************************************ 00:05:51.563 END TEST env_memory 00:05:51.563 ************************************ 00:05:51.563 04:22:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:51.563 
04:22:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.563 04:22:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.563 04:22:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.563 ************************************ 00:05:51.563 START TEST env_vtophys 00:05:51.563 ************************************ 00:05:51.563 04:22:48 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:51.563 EAL: lib.eal log level changed from notice to debug 00:05:51.563 EAL: Detected lcore 0 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 1 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 2 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 3 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 4 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 5 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 6 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 7 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 8 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 9 as core 0 on socket 0 00:05:51.563 EAL: Maximum logical cores by configuration: 128 00:05:51.563 EAL: Detected CPU lcores: 10 00:05:51.563 EAL: Detected NUMA nodes: 1 00:05:51.563 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:51.563 EAL: Detected shared linkage of DPDK 00:05:51.563 EAL: No shared files mode enabled, IPC will be disabled 00:05:51.563 EAL: Selected IOVA mode 'PA' 00:05:51.563 EAL: Probing VFIO support... 00:05:51.563 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:51.563 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:51.563 EAL: Ask a virtual area of 0x2e000 bytes 00:05:51.563 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:51.563 EAL: Setting up physically contiguous memory... 
00:05:51.563 EAL: Setting maximum number of open files to 524288 00:05:51.563 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:51.563 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:51.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.563 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:51.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.563 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:51.563 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:51.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.563 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:51.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.563 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:51.563 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:51.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.563 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:51.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.563 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:51.563 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:51.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.563 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:51.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.563 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:51.563 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:51.563 EAL: Hugepages will be freed exactly as allocated. 
00:05:51.563 EAL: No shared files mode enabled, IPC is disabled 00:05:51.563 EAL: No shared files mode enabled, IPC is disabled 00:05:51.823 EAL: TSC frequency is ~2290000 KHz 00:05:51.823 EAL: Main lcore 0 is ready (tid=7fd3099e3a40;cpuset=[0]) 00:05:51.823 EAL: Trying to obtain current memory policy. 00:05:51.823 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.823 EAL: Restoring previous memory policy: 0 00:05:51.823 EAL: request: mp_malloc_sync 00:05:51.823 EAL: No shared files mode enabled, IPC is disabled 00:05:51.823 EAL: Heap on socket 0 was expanded by 2MB 00:05:51.823 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:51.823 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:51.823 EAL: Mem event callback 'spdk:(nil)' registered 00:05:51.823 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:51.823 00:05:51.823 00:05:51.823 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.823 http://cunit.sourceforge.net/ 00:05:51.823 00:05:51.823 00:05:51.823 Suite: components_suite 00:05:52.391 Test: vtophys_malloc_test ...passed 00:05:52.391 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:52.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.391 EAL: Restoring previous memory policy: 4 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.391 EAL: request: mp_malloc_sync 00:05:52.391 EAL: No shared files mode enabled, IPC is disabled 00:05:52.391 EAL: Heap on socket 0 was expanded by 4MB 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.391 EAL: request: mp_malloc_sync 00:05:52.391 EAL: No shared files mode enabled, IPC is disabled 00:05:52.391 EAL: Heap on socket 0 was shrunk by 4MB 00:05:52.391 EAL: Trying to obtain current memory policy. 
00:05:52.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.391 EAL: Restoring previous memory policy: 4 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.391 EAL: request: mp_malloc_sync 00:05:52.391 EAL: No shared files mode enabled, IPC is disabled 00:05:52.391 EAL: Heap on socket 0 was expanded by 6MB 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.391 EAL: request: mp_malloc_sync 00:05:52.391 EAL: No shared files mode enabled, IPC is disabled 00:05:52.391 EAL: Heap on socket 0 was shrunk by 6MB 00:05:52.391 EAL: Trying to obtain current memory policy. 00:05:52.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.391 EAL: Restoring previous memory policy: 4 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.391 EAL: request: mp_malloc_sync 00:05:52.391 EAL: No shared files mode enabled, IPC is disabled 00:05:52.391 EAL: Heap on socket 0 was expanded by 10MB 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.391 EAL: request: mp_malloc_sync 00:05:52.391 EAL: No shared files mode enabled, IPC is disabled 00:05:52.391 EAL: Heap on socket 0 was shrunk by 10MB 00:05:52.391 EAL: Trying to obtain current memory policy. 00:05:52.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.391 EAL: Restoring previous memory policy: 4 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.391 EAL: request: mp_malloc_sync 00:05:52.391 EAL: No shared files mode enabled, IPC is disabled 00:05:52.391 EAL: Heap on socket 0 was expanded by 18MB 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.391 EAL: request: mp_malloc_sync 00:05:52.391 EAL: No shared files mode enabled, IPC is disabled 00:05:52.391 EAL: Heap on socket 0 was shrunk by 18MB 00:05:52.391 EAL: Trying to obtain current memory policy. 
00:05:52.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.391 EAL: Restoring previous memory policy: 4 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.391 EAL: request: mp_malloc_sync 00:05:52.391 EAL: No shared files mode enabled, IPC is disabled 00:05:52.391 EAL: Heap on socket 0 was expanded by 34MB 00:05:52.391 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.649 EAL: request: mp_malloc_sync 00:05:52.649 EAL: No shared files mode enabled, IPC is disabled 00:05:52.649 EAL: Heap on socket 0 was shrunk by 34MB 00:05:52.649 EAL: Trying to obtain current memory policy. 00:05:52.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.650 EAL: Restoring previous memory policy: 4 00:05:52.650 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.650 EAL: request: mp_malloc_sync 00:05:52.650 EAL: No shared files mode enabled, IPC is disabled 00:05:52.650 EAL: Heap on socket 0 was expanded by 66MB 00:05:52.650 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.650 EAL: request: mp_malloc_sync 00:05:52.650 EAL: No shared files mode enabled, IPC is disabled 00:05:52.650 EAL: Heap on socket 0 was shrunk by 66MB 00:05:52.924 EAL: Trying to obtain current memory policy. 00:05:52.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.924 EAL: Restoring previous memory policy: 4 00:05:52.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.924 EAL: request: mp_malloc_sync 00:05:52.924 EAL: No shared files mode enabled, IPC is disabled 00:05:52.924 EAL: Heap on socket 0 was expanded by 130MB 00:05:53.209 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.209 EAL: request: mp_malloc_sync 00:05:53.209 EAL: No shared files mode enabled, IPC is disabled 00:05:53.209 EAL: Heap on socket 0 was shrunk by 130MB 00:05:53.469 EAL: Trying to obtain current memory policy. 
00:05:53.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.469 EAL: Restoring previous memory policy: 4 00:05:53.469 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.469 EAL: request: mp_malloc_sync 00:05:53.469 EAL: No shared files mode enabled, IPC is disabled 00:05:53.469 EAL: Heap on socket 0 was expanded by 258MB 00:05:54.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.037 EAL: request: mp_malloc_sync 00:05:54.037 EAL: No shared files mode enabled, IPC is disabled 00:05:54.037 EAL: Heap on socket 0 was shrunk by 258MB 00:05:54.605 EAL: Trying to obtain current memory policy. 00:05:54.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.605 EAL: Restoring previous memory policy: 4 00:05:54.605 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.605 EAL: request: mp_malloc_sync 00:05:54.605 EAL: No shared files mode enabled, IPC is disabled 00:05:54.605 EAL: Heap on socket 0 was expanded by 514MB 00:05:55.982 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.982 EAL: request: mp_malloc_sync 00:05:55.982 EAL: No shared files mode enabled, IPC is disabled 00:05:55.982 EAL: Heap on socket 0 was shrunk by 514MB 00:05:56.921 EAL: Trying to obtain current memory policy. 
00:05:56.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.180 EAL: Restoring previous memory policy: 4 00:05:57.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.180 EAL: request: mp_malloc_sync 00:05:57.180 EAL: No shared files mode enabled, IPC is disabled 00:05:57.180 EAL: Heap on socket 0 was expanded by 1026MB 00:05:59.085 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.344 EAL: request: mp_malloc_sync 00:05:59.344 EAL: No shared files mode enabled, IPC is disabled 00:05:59.345 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:01.252 passed 00:06:01.252 00:06:01.252 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.252 suites 1 1 n/a 0 0 00:06:01.252 tests 2 2 2 0 0 00:06:01.252 asserts 5670 5670 5670 0 n/a 00:06:01.252 00:06:01.252 Elapsed time = 9.172 seconds 00:06:01.252 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.252 EAL: request: mp_malloc_sync 00:06:01.252 EAL: No shared files mode enabled, IPC is disabled 00:06:01.252 EAL: Heap on socket 0 was shrunk by 2MB 00:06:01.252 EAL: No shared files mode enabled, IPC is disabled 00:06:01.252 EAL: No shared files mode enabled, IPC is disabled 00:06:01.252 EAL: No shared files mode enabled, IPC is disabled 00:06:01.252 00:06:01.252 real 0m9.505s 00:06:01.252 user 0m8.061s 00:06:01.252 sys 0m1.284s 00:06:01.252 04:22:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.252 04:22:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:01.252 ************************************ 00:06:01.252 END TEST env_vtophys 00:06:01.252 ************************************ 00:06:01.252 04:22:57 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:01.252 04:22:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.252 04:22:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.252 04:22:57 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.252 
************************************ 00:06:01.252 START TEST env_pci 00:06:01.252 ************************************ 00:06:01.252 04:22:57 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:01.252 00:06:01.252 00:06:01.252 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.252 http://cunit.sourceforge.net/ 00:06:01.252 00:06:01.252 00:06:01.252 Suite: pci 00:06:01.252 Test: pci_hook ...[2024-11-27 04:22:57.672706] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56848 has claimed it 00:06:01.252 passed 00:06:01.252 00:06:01.252 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.252 suites 1 1 n/a 0 0 00:06:01.252 tests 1 1 1 0 0 00:06:01.252 asserts 25 25 25 0 n/a 00:06:01.252 00:06:01.252 Elapsed time = 0.005 seconds 00:06:01.252 EAL: Cannot find device (10000:00:01.0) 00:06:01.252 EAL: Failed to attach device on primary process 00:06:01.252 00:06:01.252 real 0m0.110s 00:06:01.252 user 0m0.051s 00:06:01.252 sys 0m0.058s 00:06:01.252 04:22:57 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.252 04:22:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:01.252 ************************************ 00:06:01.252 END TEST env_pci 00:06:01.252 ************************************ 00:06:01.252 04:22:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:01.252 04:22:57 env -- env/env.sh@15 -- # uname 00:06:01.252 04:22:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:01.252 04:22:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:01.252 04:22:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:01.252 04:22:57 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:01.252 04:22:57 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.252 04:22:57 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.252 ************************************ 00:06:01.252 START TEST env_dpdk_post_init 00:06:01.252 ************************************ 00:06:01.252 04:22:57 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:01.512 EAL: Detected CPU lcores: 10 00:06:01.512 EAL: Detected NUMA nodes: 1 00:06:01.512 EAL: Detected shared linkage of DPDK 00:06:01.512 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:01.512 EAL: Selected IOVA mode 'PA' 00:06:01.512 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:01.512 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:01.512 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:01.512 Starting DPDK initialization... 00:06:01.512 Starting SPDK post initialization... 00:06:01.512 SPDK NVMe probe 00:06:01.512 Attaching to 0000:00:10.0 00:06:01.512 Attaching to 0000:00:11.0 00:06:01.512 Attached to 0000:00:10.0 00:06:01.512 Attached to 0000:00:11.0 00:06:01.512 Cleaning up... 
00:06:01.771 00:06:01.771 real 0m0.305s 00:06:01.771 user 0m0.099s 00:06:01.771 sys 0m0.106s 00:06:01.771 04:22:58 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.771 04:22:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.771 ************************************ 00:06:01.771 END TEST env_dpdk_post_init 00:06:01.771 ************************************ 00:06:01.771 04:22:58 env -- env/env.sh@26 -- # uname 00:06:01.771 04:22:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:01.771 04:22:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.771 04:22:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.771 04:22:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.771 04:22:58 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.771 ************************************ 00:06:01.771 START TEST env_mem_callbacks 00:06:01.771 ************************************ 00:06:01.771 04:22:58 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.771 EAL: Detected CPU lcores: 10 00:06:01.771 EAL: Detected NUMA nodes: 1 00:06:01.771 EAL: Detected shared linkage of DPDK 00:06:01.771 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:01.771 EAL: Selected IOVA mode 'PA' 00:06:02.030 00:06:02.030 00:06:02.030 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.030 http://cunit.sourceforge.net/ 00:06:02.030 00:06:02.030 00:06:02.030 Suite: memory 00:06:02.030 Test: test ... 
00:06:02.030 register 0x200000200000 2097152 00:06:02.030 malloc 3145728 00:06:02.030 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:02.030 register 0x200000400000 4194304 00:06:02.030 buf 0x2000004fffc0 len 3145728 PASSED 00:06:02.030 malloc 64 00:06:02.030 buf 0x2000004ffec0 len 64 PASSED 00:06:02.030 malloc 4194304 00:06:02.030 register 0x200000800000 6291456 00:06:02.030 buf 0x2000009fffc0 len 4194304 PASSED 00:06:02.030 free 0x2000004fffc0 3145728 00:06:02.030 free 0x2000004ffec0 64 00:06:02.030 unregister 0x200000400000 4194304 PASSED 00:06:02.030 free 0x2000009fffc0 4194304 00:06:02.030 unregister 0x200000800000 6291456 PASSED 00:06:02.030 malloc 8388608 00:06:02.030 register 0x200000400000 10485760 00:06:02.030 buf 0x2000005fffc0 len 8388608 PASSED 00:06:02.030 free 0x2000005fffc0 8388608 00:06:02.030 unregister 0x200000400000 10485760 PASSED 00:06:02.030 passed 00:06:02.030 00:06:02.030 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.030 suites 1 1 n/a 0 0 00:06:02.030 tests 1 1 1 0 0 00:06:02.030 asserts 15 15 15 0 n/a 00:06:02.030 00:06:02.030 Elapsed time = 0.080 seconds 00:06:02.030 00:06:02.030 real 0m0.275s 00:06:02.031 user 0m0.109s 00:06:02.031 sys 0m0.065s 00:06:02.031 04:22:58 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.031 04:22:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:02.031 ************************************ 00:06:02.031 END TEST env_mem_callbacks 00:06:02.031 ************************************ 00:06:02.031 00:06:02.031 real 0m11.060s 00:06:02.031 user 0m8.791s 00:06:02.031 sys 0m1.905s 00:06:02.031 04:22:58 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.031 04:22:58 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.031 ************************************ 00:06:02.031 END TEST env 00:06:02.031 ************************************ 00:06:02.031 04:22:58 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:02.031 04:22:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.031 04:22:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.031 04:22:58 -- common/autotest_common.sh@10 -- # set +x 00:06:02.031 ************************************ 00:06:02.031 START TEST rpc 00:06:02.031 ************************************ 00:06:02.031 04:22:58 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:02.290 * Looking for test storage... 00:06:02.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.290 04:22:58 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.290 04:22:58 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.290 04:22:58 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.290 04:22:58 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.290 04:22:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.290 04:22:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.290 04:22:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.290 04:22:58 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.290 04:22:58 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.290 04:22:58 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.290 04:22:58 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.290 04:22:58 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.290 04:22:58 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.290 04:22:58 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.290 04:22:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.290 04:22:58 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:02.290 04:22:58 rpc -- scripts/common.sh@345 -- # : 1 00:06:02.290 04:22:58 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.290 04:22:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.290 04:22:58 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:02.290 04:22:58 rpc -- scripts/common.sh@353 -- # local d=1 00:06:02.290 04:22:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.290 04:22:58 rpc -- scripts/common.sh@355 -- # echo 1 00:06:02.290 04:22:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.290 04:22:58 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:02.290 04:22:58 rpc -- scripts/common.sh@353 -- # local d=2 00:06:02.290 04:22:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.290 04:22:58 rpc -- scripts/common.sh@355 -- # echo 2 00:06:02.290 04:22:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.290 04:22:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.290 04:22:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.290 04:22:58 rpc -- scripts/common.sh@368 -- # return 0 00:06:02.290 04:22:58 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.290 04:22:58 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.290 --rc genhtml_branch_coverage=1 00:06:02.290 --rc genhtml_function_coverage=1 00:06:02.290 --rc genhtml_legend=1 00:06:02.290 --rc geninfo_all_blocks=1 00:06:02.290 --rc geninfo_unexecuted_blocks=1 00:06:02.290 00:06:02.290 ' 00:06:02.290 04:22:58 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.290 --rc genhtml_branch_coverage=1 00:06:02.290 --rc genhtml_function_coverage=1 00:06:02.290 --rc genhtml_legend=1 00:06:02.290 --rc geninfo_all_blocks=1 00:06:02.290 --rc geninfo_unexecuted_blocks=1 00:06:02.290 00:06:02.290 ' 00:06:02.290 04:22:58 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:02.290 --rc genhtml_branch_coverage=1 00:06:02.290 --rc genhtml_function_coverage=1 00:06:02.290 --rc genhtml_legend=1 00:06:02.290 --rc geninfo_all_blocks=1 00:06:02.290 --rc geninfo_unexecuted_blocks=1 00:06:02.290 00:06:02.290 ' 00:06:02.290 04:22:58 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.290 --rc genhtml_branch_coverage=1 00:06:02.290 --rc genhtml_function_coverage=1 00:06:02.290 --rc genhtml_legend=1 00:06:02.290 --rc geninfo_all_blocks=1 00:06:02.290 --rc geninfo_unexecuted_blocks=1 00:06:02.290 00:06:02.290 ' 00:06:02.291 04:22:58 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:02.291 04:22:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56981 00:06:02.291 04:22:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.291 04:22:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56981 00:06:02.291 04:22:58 rpc -- common/autotest_common.sh@835 -- # '[' -z 56981 ']' 00:06:02.291 04:22:58 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.291 04:22:58 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.291 04:22:58 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.291 04:22:58 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.291 04:22:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.549 [2024-11-27 04:22:58.887172] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:02.549 [2024-11-27 04:22:58.887310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56981 ] 00:06:02.549 [2024-11-27 04:22:59.048546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.813 [2024-11-27 04:22:59.158752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:02.813 [2024-11-27 04:22:59.158814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56981' to capture a snapshot of events at runtime. 00:06:02.813 [2024-11-27 04:22:59.158824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:02.813 [2024-11-27 04:22:59.158849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:02.813 [2024-11-27 04:22:59.158856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56981 for offline analysis/debug. 
00:06:02.813 [2024-11-27 04:22:59.160183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.748 04:22:59 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.748 04:22:59 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.748 04:22:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:03.748 04:22:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:03.748 04:22:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:03.748 04:22:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:03.748 04:22:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.748 04:22:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.748 04:22:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.748 ************************************ 00:06:03.748 START TEST rpc_integrity 00:06:03.748 ************************************ 00:06:03.748 04:22:59 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:03.748 04:22:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:03.748 04:23:00 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:03.748 { 00:06:03.748 "name": "Malloc0", 00:06:03.748 "aliases": [ 00:06:03.748 "17dcc4c7-7eb6-4c31-b12d-f6b56ac8eeb3" 00:06:03.748 ], 00:06:03.748 "product_name": "Malloc disk", 00:06:03.748 "block_size": 512, 00:06:03.748 "num_blocks": 16384, 00:06:03.748 "uuid": "17dcc4c7-7eb6-4c31-b12d-f6b56ac8eeb3", 00:06:03.748 "assigned_rate_limits": { 00:06:03.748 "rw_ios_per_sec": 0, 00:06:03.748 "rw_mbytes_per_sec": 0, 00:06:03.748 "r_mbytes_per_sec": 0, 00:06:03.748 "w_mbytes_per_sec": 0 00:06:03.748 }, 00:06:03.748 "claimed": false, 00:06:03.748 "zoned": false, 00:06:03.748 "supported_io_types": { 00:06:03.748 "read": true, 00:06:03.748 "write": true, 00:06:03.748 "unmap": true, 00:06:03.748 "flush": true, 00:06:03.748 "reset": true, 00:06:03.748 "nvme_admin": false, 00:06:03.748 "nvme_io": false, 00:06:03.748 "nvme_io_md": false, 00:06:03.748 "write_zeroes": true, 00:06:03.748 "zcopy": true, 00:06:03.748 "get_zone_info": false, 00:06:03.748 "zone_management": false, 00:06:03.748 "zone_append": false, 00:06:03.748 "compare": false, 00:06:03.748 "compare_and_write": false, 00:06:03.748 "abort": true, 00:06:03.748 "seek_hole": false, 
00:06:03.748 "seek_data": false, 00:06:03.748 "copy": true, 00:06:03.748 "nvme_iov_md": false 00:06:03.748 }, 00:06:03.748 "memory_domains": [ 00:06:03.748 { 00:06:03.748 "dma_device_id": "system", 00:06:03.748 "dma_device_type": 1 00:06:03.748 }, 00:06:03.748 { 00:06:03.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.748 "dma_device_type": 2 00:06:03.748 } 00:06:03.748 ], 00:06:03.748 "driver_specific": {} 00:06:03.748 } 00:06:03.748 ]' 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.748 [2024-11-27 04:23:00.160615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:03.748 [2024-11-27 04:23:00.160693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:03.748 [2024-11-27 04:23:00.160718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:03.748 [2024-11-27 04:23:00.160733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:03.748 [2024-11-27 04:23:00.163163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:03.748 [2024-11-27 04:23:00.163205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:03.748 Passthru0 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:03.748 { 00:06:03.748 "name": "Malloc0", 00:06:03.748 "aliases": [ 00:06:03.748 "17dcc4c7-7eb6-4c31-b12d-f6b56ac8eeb3" 00:06:03.748 ], 00:06:03.748 "product_name": "Malloc disk", 00:06:03.748 "block_size": 512, 00:06:03.748 "num_blocks": 16384, 00:06:03.748 "uuid": "17dcc4c7-7eb6-4c31-b12d-f6b56ac8eeb3", 00:06:03.748 "assigned_rate_limits": { 00:06:03.748 "rw_ios_per_sec": 0, 00:06:03.748 "rw_mbytes_per_sec": 0, 00:06:03.748 "r_mbytes_per_sec": 0, 00:06:03.748 "w_mbytes_per_sec": 0 00:06:03.748 }, 00:06:03.748 "claimed": true, 00:06:03.748 "claim_type": "exclusive_write", 00:06:03.748 "zoned": false, 00:06:03.748 "supported_io_types": { 00:06:03.748 "read": true, 00:06:03.748 "write": true, 00:06:03.748 "unmap": true, 00:06:03.748 "flush": true, 00:06:03.748 "reset": true, 00:06:03.748 "nvme_admin": false, 00:06:03.748 "nvme_io": false, 00:06:03.748 "nvme_io_md": false, 00:06:03.748 "write_zeroes": true, 00:06:03.748 "zcopy": true, 00:06:03.748 "get_zone_info": false, 00:06:03.748 "zone_management": false, 00:06:03.748 "zone_append": false, 00:06:03.748 "compare": false, 00:06:03.748 "compare_and_write": false, 00:06:03.748 "abort": true, 00:06:03.748 "seek_hole": false, 00:06:03.748 "seek_data": false, 00:06:03.748 "copy": true, 00:06:03.748 "nvme_iov_md": false 00:06:03.748 }, 00:06:03.748 "memory_domains": [ 00:06:03.748 { 00:06:03.748 "dma_device_id": "system", 00:06:03.748 "dma_device_type": 1 00:06:03.748 }, 00:06:03.748 { 00:06:03.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.748 "dma_device_type": 2 00:06:03.748 } 00:06:03.748 ], 00:06:03.748 "driver_specific": {} 00:06:03.748 }, 00:06:03.748 { 00:06:03.748 "name": "Passthru0", 00:06:03.748 "aliases": [ 00:06:03.748 "0e824fe2-e5f3-5bbb-a27b-6c62a44c63f4" 00:06:03.748 ], 00:06:03.748 "product_name": "passthru", 00:06:03.748 
"block_size": 512, 00:06:03.748 "num_blocks": 16384, 00:06:03.748 "uuid": "0e824fe2-e5f3-5bbb-a27b-6c62a44c63f4", 00:06:03.748 "assigned_rate_limits": { 00:06:03.748 "rw_ios_per_sec": 0, 00:06:03.748 "rw_mbytes_per_sec": 0, 00:06:03.748 "r_mbytes_per_sec": 0, 00:06:03.748 "w_mbytes_per_sec": 0 00:06:03.748 }, 00:06:03.748 "claimed": false, 00:06:03.748 "zoned": false, 00:06:03.748 "supported_io_types": { 00:06:03.748 "read": true, 00:06:03.748 "write": true, 00:06:03.748 "unmap": true, 00:06:03.748 "flush": true, 00:06:03.748 "reset": true, 00:06:03.748 "nvme_admin": false, 00:06:03.748 "nvme_io": false, 00:06:03.748 "nvme_io_md": false, 00:06:03.748 "write_zeroes": true, 00:06:03.748 "zcopy": true, 00:06:03.748 "get_zone_info": false, 00:06:03.748 "zone_management": false, 00:06:03.748 "zone_append": false, 00:06:03.748 "compare": false, 00:06:03.748 "compare_and_write": false, 00:06:03.748 "abort": true, 00:06:03.748 "seek_hole": false, 00:06:03.748 "seek_data": false, 00:06:03.748 "copy": true, 00:06:03.748 "nvme_iov_md": false 00:06:03.748 }, 00:06:03.748 "memory_domains": [ 00:06:03.748 { 00:06:03.748 "dma_device_id": "system", 00:06:03.748 "dma_device_type": 1 00:06:03.748 }, 00:06:03.748 { 00:06:03.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.748 "dma_device_type": 2 00:06:03.748 } 00:06:03.748 ], 00:06:03.748 "driver_specific": { 00:06:03.748 "passthru": { 00:06:03.748 "name": "Passthru0", 00:06:03.748 "base_bdev_name": "Malloc0" 00:06:03.748 } 00:06:03.748 } 00:06:03.748 } 00:06:03.748 ]' 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.748 04:23:00 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.748 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:03.748 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:04.007 04:23:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.007 00:06:04.007 real 0m0.351s 00:06:04.007 user 0m0.178s 00:06:04.007 sys 0m0.068s 00:06:04.007 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.007 04:23:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.007 ************************************ 00:06:04.007 END TEST rpc_integrity 00:06:04.007 ************************************ 00:06:04.007 04:23:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:04.007 04:23:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.007 04:23:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.007 04:23:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.007 ************************************ 00:06:04.007 START TEST rpc_plugins 00:06:04.007 ************************************ 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:04.007 { 00:06:04.007 "name": "Malloc1", 00:06:04.007 "aliases": [ 00:06:04.007 "ac66363a-f5cd-429d-849b-f3fcb402f12e" 00:06:04.007 ], 00:06:04.007 "product_name": "Malloc disk", 00:06:04.007 "block_size": 4096, 00:06:04.007 "num_blocks": 256, 00:06:04.007 "uuid": "ac66363a-f5cd-429d-849b-f3fcb402f12e", 00:06:04.007 "assigned_rate_limits": { 00:06:04.007 "rw_ios_per_sec": 0, 00:06:04.007 "rw_mbytes_per_sec": 0, 00:06:04.007 "r_mbytes_per_sec": 0, 00:06:04.007 "w_mbytes_per_sec": 0 00:06:04.007 }, 00:06:04.007 "claimed": false, 00:06:04.007 "zoned": false, 00:06:04.007 "supported_io_types": { 00:06:04.007 "read": true, 00:06:04.007 "write": true, 00:06:04.007 "unmap": true, 00:06:04.007 "flush": true, 00:06:04.007 "reset": true, 00:06:04.007 "nvme_admin": false, 00:06:04.007 "nvme_io": false, 00:06:04.007 "nvme_io_md": false, 00:06:04.007 "write_zeroes": true, 00:06:04.007 "zcopy": true, 00:06:04.007 "get_zone_info": false, 00:06:04.007 "zone_management": false, 00:06:04.007 "zone_append": false, 00:06:04.007 "compare": false, 00:06:04.007 "compare_and_write": false, 00:06:04.007 "abort": true, 00:06:04.007 "seek_hole": false, 00:06:04.007 "seek_data": false, 00:06:04.007 "copy": 
true, 00:06:04.007 "nvme_iov_md": false 00:06:04.007 }, 00:06:04.007 "memory_domains": [ 00:06:04.007 { 00:06:04.007 "dma_device_id": "system", 00:06:04.007 "dma_device_type": 1 00:06:04.007 }, 00:06:04.007 { 00:06:04.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.007 "dma_device_type": 2 00:06:04.007 } 00:06:04.007 ], 00:06:04.007 "driver_specific": {} 00:06:04.007 } 00:06:04.007 ]' 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:04.007 04:23:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:04.007 00:06:04.007 real 0m0.179s 00:06:04.007 user 0m0.101s 00:06:04.007 sys 0m0.027s 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.007 04:23:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.007 ************************************ 00:06:04.007 END TEST rpc_plugins 00:06:04.007 ************************************ 00:06:04.265 04:23:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:04.265 04:23:00 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.265 04:23:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.265 04:23:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.265 ************************************ 00:06:04.265 START TEST rpc_trace_cmd_test 00:06:04.265 ************************************ 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:04.265 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56981", 00:06:04.265 "tpoint_group_mask": "0x8", 00:06:04.265 "iscsi_conn": { 00:06:04.265 "mask": "0x2", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "scsi": { 00:06:04.265 "mask": "0x4", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "bdev": { 00:06:04.265 "mask": "0x8", 00:06:04.265 "tpoint_mask": "0xffffffffffffffff" 00:06:04.265 }, 00:06:04.265 "nvmf_rdma": { 00:06:04.265 "mask": "0x10", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "nvmf_tcp": { 00:06:04.265 "mask": "0x20", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "ftl": { 00:06:04.265 "mask": "0x40", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "blobfs": { 00:06:04.265 "mask": "0x80", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "dsa": { 00:06:04.265 "mask": "0x200", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "thread": { 00:06:04.265 "mask": "0x400", 00:06:04.265 
"tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "nvme_pcie": { 00:06:04.265 "mask": "0x800", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "iaa": { 00:06:04.265 "mask": "0x1000", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "nvme_tcp": { 00:06:04.265 "mask": "0x2000", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "bdev_nvme": { 00:06:04.265 "mask": "0x4000", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "sock": { 00:06:04.265 "mask": "0x8000", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "blob": { 00:06:04.265 "mask": "0x10000", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "bdev_raid": { 00:06:04.265 "mask": "0x20000", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 }, 00:06:04.265 "scheduler": { 00:06:04.265 "mask": "0x40000", 00:06:04.265 "tpoint_mask": "0x0" 00:06:04.265 } 00:06:04.265 }' 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:04.265 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:04.524 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:04.524 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:04.524 04:23:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:04.524 00:06:04.524 real 0m0.263s 00:06:04.524 user 0m0.204s 00:06:04.524 sys 0m0.049s 00:06:04.524 04:23:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:04.524 04:23:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.524 ************************************ 00:06:04.524 END TEST rpc_trace_cmd_test 00:06:04.524 ************************************ 00:06:04.524 04:23:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:04.524 04:23:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:04.524 04:23:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:04.524 04:23:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.524 04:23:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.524 04:23:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.524 ************************************ 00:06:04.524 START TEST rpc_daemon_integrity 00:06:04.524 ************************************ 00:06:04.524 04:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:04.524 04:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:04.524 04:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.524 04:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.524 04:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.524 04:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:04.524 04:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:04.525 { 00:06:04.525 "name": "Malloc2", 00:06:04.525 "aliases": [ 00:06:04.525 "24d4d803-ef08-407f-bc9f-db1edba89eb9" 00:06:04.525 ], 00:06:04.525 "product_name": "Malloc disk", 00:06:04.525 "block_size": 512, 00:06:04.525 "num_blocks": 16384, 00:06:04.525 "uuid": "24d4d803-ef08-407f-bc9f-db1edba89eb9", 00:06:04.525 "assigned_rate_limits": { 00:06:04.525 "rw_ios_per_sec": 0, 00:06:04.525 "rw_mbytes_per_sec": 0, 00:06:04.525 "r_mbytes_per_sec": 0, 00:06:04.525 "w_mbytes_per_sec": 0 00:06:04.525 }, 00:06:04.525 "claimed": false, 00:06:04.525 "zoned": false, 00:06:04.525 "supported_io_types": { 00:06:04.525 "read": true, 00:06:04.525 "write": true, 00:06:04.525 "unmap": true, 00:06:04.525 "flush": true, 00:06:04.525 "reset": true, 00:06:04.525 "nvme_admin": false, 00:06:04.525 "nvme_io": false, 00:06:04.525 "nvme_io_md": false, 00:06:04.525 "write_zeroes": true, 00:06:04.525 "zcopy": true, 00:06:04.525 "get_zone_info": false, 00:06:04.525 "zone_management": false, 00:06:04.525 "zone_append": false, 00:06:04.525 "compare": false, 00:06:04.525 "compare_and_write": false, 00:06:04.525 "abort": true, 00:06:04.525 "seek_hole": false, 00:06:04.525 "seek_data": false, 00:06:04.525 "copy": true, 00:06:04.525 "nvme_iov_md": false 00:06:04.525 }, 00:06:04.525 "memory_domains": [ 00:06:04.525 { 00:06:04.525 "dma_device_id": "system", 00:06:04.525 "dma_device_type": 1 00:06:04.525 }, 00:06:04.525 { 00:06:04.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.525 "dma_device_type": 2 00:06:04.525 } 
00:06:04.525 ], 00:06:04.525 "driver_specific": {} 00:06:04.525 } 00:06:04.525 ]' 00:06:04.525 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.784 [2024-11-27 04:23:01.142576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:04.784 [2024-11-27 04:23:01.142658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.784 [2024-11-27 04:23:01.142682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:04.784 [2024-11-27 04:23:01.142695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.784 [2024-11-27 04:23:01.145113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.784 [2024-11-27 04:23:01.145161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:04.784 Passthru0 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.784 { 00:06:04.784 "name": "Malloc2", 00:06:04.784 "aliases": [ 00:06:04.784 "24d4d803-ef08-407f-bc9f-db1edba89eb9" 
00:06:04.784 ], 00:06:04.784 "product_name": "Malloc disk", 00:06:04.784 "block_size": 512, 00:06:04.784 "num_blocks": 16384, 00:06:04.784 "uuid": "24d4d803-ef08-407f-bc9f-db1edba89eb9", 00:06:04.784 "assigned_rate_limits": { 00:06:04.784 "rw_ios_per_sec": 0, 00:06:04.784 "rw_mbytes_per_sec": 0, 00:06:04.784 "r_mbytes_per_sec": 0, 00:06:04.784 "w_mbytes_per_sec": 0 00:06:04.784 }, 00:06:04.784 "claimed": true, 00:06:04.784 "claim_type": "exclusive_write", 00:06:04.784 "zoned": false, 00:06:04.784 "supported_io_types": { 00:06:04.784 "read": true, 00:06:04.784 "write": true, 00:06:04.784 "unmap": true, 00:06:04.784 "flush": true, 00:06:04.784 "reset": true, 00:06:04.784 "nvme_admin": false, 00:06:04.784 "nvme_io": false, 00:06:04.784 "nvme_io_md": false, 00:06:04.784 "write_zeroes": true, 00:06:04.784 "zcopy": true, 00:06:04.784 "get_zone_info": false, 00:06:04.784 "zone_management": false, 00:06:04.784 "zone_append": false, 00:06:04.784 "compare": false, 00:06:04.784 "compare_and_write": false, 00:06:04.784 "abort": true, 00:06:04.784 "seek_hole": false, 00:06:04.784 "seek_data": false, 00:06:04.784 "copy": true, 00:06:04.784 "nvme_iov_md": false 00:06:04.784 }, 00:06:04.784 "memory_domains": [ 00:06:04.784 { 00:06:04.784 "dma_device_id": "system", 00:06:04.784 "dma_device_type": 1 00:06:04.784 }, 00:06:04.784 { 00:06:04.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.784 "dma_device_type": 2 00:06:04.784 } 00:06:04.784 ], 00:06:04.784 "driver_specific": {} 00:06:04.784 }, 00:06:04.784 { 00:06:04.784 "name": "Passthru0", 00:06:04.784 "aliases": [ 00:06:04.784 "7c23f62e-a9b2-52a7-9ada-2d00e127c0ec" 00:06:04.784 ], 00:06:04.784 "product_name": "passthru", 00:06:04.784 "block_size": 512, 00:06:04.784 "num_blocks": 16384, 00:06:04.784 "uuid": "7c23f62e-a9b2-52a7-9ada-2d00e127c0ec", 00:06:04.784 "assigned_rate_limits": { 00:06:04.784 "rw_ios_per_sec": 0, 00:06:04.784 "rw_mbytes_per_sec": 0, 00:06:04.784 "r_mbytes_per_sec": 0, 00:06:04.784 "w_mbytes_per_sec": 0 
00:06:04.784 }, 00:06:04.784 "claimed": false, 00:06:04.784 "zoned": false, 00:06:04.784 "supported_io_types": { 00:06:04.784 "read": true, 00:06:04.784 "write": true, 00:06:04.784 "unmap": true, 00:06:04.784 "flush": true, 00:06:04.784 "reset": true, 00:06:04.784 "nvme_admin": false, 00:06:04.784 "nvme_io": false, 00:06:04.784 "nvme_io_md": false, 00:06:04.784 "write_zeroes": true, 00:06:04.784 "zcopy": true, 00:06:04.784 "get_zone_info": false, 00:06:04.784 "zone_management": false, 00:06:04.784 "zone_append": false, 00:06:04.784 "compare": false, 00:06:04.784 "compare_and_write": false, 00:06:04.784 "abort": true, 00:06:04.784 "seek_hole": false, 00:06:04.784 "seek_data": false, 00:06:04.784 "copy": true, 00:06:04.784 "nvme_iov_md": false 00:06:04.784 }, 00:06:04.784 "memory_domains": [ 00:06:04.784 { 00:06:04.784 "dma_device_id": "system", 00:06:04.784 "dma_device_type": 1 00:06:04.784 }, 00:06:04.784 { 00:06:04.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.784 "dma_device_type": 2 00:06:04.784 } 00:06:04.784 ], 00:06:04.784 "driver_specific": { 00:06:04.784 "passthru": { 00:06:04.784 "name": "Passthru0", 00:06:04.784 "base_bdev_name": "Malloc2" 00:06:04.784 } 00:06:04.784 } 00:06:04.784 } 00:06:04.784 ]' 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.784 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.785 00:06:04.785 real 0m0.363s 00:06:04.785 user 0m0.198s 00:06:04.785 sys 0m0.061s 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.785 04:23:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.785 ************************************ 00:06:04.785 END TEST rpc_daemon_integrity 00:06:04.785 ************************************ 00:06:05.043 04:23:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:05.043 04:23:01 rpc -- rpc/rpc.sh@84 -- # killprocess 56981 00:06:05.043 04:23:01 rpc -- common/autotest_common.sh@954 -- # '[' -z 56981 ']' 00:06:05.043 04:23:01 rpc -- common/autotest_common.sh@958 -- # kill -0 56981 00:06:05.043 04:23:01 rpc -- common/autotest_common.sh@959 -- # uname 00:06:05.043 04:23:01 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.043 04:23:01 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56981 00:06:05.043 04:23:01 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.043 04:23:01 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.043 
killing process with pid 56981 00:06:05.044 04:23:01 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56981' 00:06:05.044 04:23:01 rpc -- common/autotest_common.sh@973 -- # kill 56981 00:06:05.044 04:23:01 rpc -- common/autotest_common.sh@978 -- # wait 56981 00:06:07.603 00:06:07.603 real 0m5.121s 00:06:07.603 user 0m5.668s 00:06:07.603 sys 0m0.948s 00:06:07.603 04:23:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.603 04:23:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.603 ************************************ 00:06:07.603 END TEST rpc 00:06:07.603 ************************************ 00:06:07.603 04:23:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:07.603 04:23:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.603 04:23:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.603 04:23:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.603 ************************************ 00:06:07.603 START TEST skip_rpc 00:06:07.603 ************************************ 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:07.603 * Looking for test storage... 
00:06:07.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.603 04:23:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.603 --rc genhtml_branch_coverage=1 00:06:07.603 --rc genhtml_function_coverage=1 00:06:07.603 --rc genhtml_legend=1 00:06:07.603 --rc geninfo_all_blocks=1 00:06:07.603 --rc geninfo_unexecuted_blocks=1 00:06:07.603 00:06:07.603 ' 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.603 --rc genhtml_branch_coverage=1 00:06:07.603 --rc genhtml_function_coverage=1 00:06:07.603 --rc genhtml_legend=1 00:06:07.603 --rc geninfo_all_blocks=1 00:06:07.603 --rc geninfo_unexecuted_blocks=1 00:06:07.603 00:06:07.603 ' 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:07.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.603 --rc genhtml_branch_coverage=1 00:06:07.603 --rc genhtml_function_coverage=1 00:06:07.603 --rc genhtml_legend=1 00:06:07.603 --rc geninfo_all_blocks=1 00:06:07.603 --rc geninfo_unexecuted_blocks=1 00:06:07.603 00:06:07.603 ' 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.603 --rc genhtml_branch_coverage=1 00:06:07.603 --rc genhtml_function_coverage=1 00:06:07.603 --rc genhtml_legend=1 00:06:07.603 --rc geninfo_all_blocks=1 00:06:07.603 --rc geninfo_unexecuted_blocks=1 00:06:07.603 00:06:07.603 ' 00:06:07.603 04:23:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:07.603 04:23:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:07.603 04:23:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.603 04:23:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.603 ************************************ 00:06:07.603 START TEST skip_rpc 00:06:07.603 ************************************ 00:06:07.603 04:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:07.603 04:23:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57210 00:06:07.603 04:23:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:07.603 04:23:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.603 04:23:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:07.603 [2024-11-27 04:23:04.098009] Starting SPDK v25.01-pre 
git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:07.603 [2024-11-27 04:23:04.098147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57210 ] 00:06:07.863 [2024-11-27 04:23:04.259462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.863 [2024-11-27 04:23:04.372600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57210 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57210 ']' 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57210 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57210 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.154 killing process with pid 57210 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57210' 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57210 00:06:13.154 04:23:09 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57210 00:06:15.061 00:06:15.061 real 0m7.434s 00:06:15.061 user 0m6.971s 00:06:15.061 sys 0m0.387s 00:06:15.061 04:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.061 04:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.061 ************************************ 00:06:15.061 END TEST skip_rpc 00:06:15.061 ************************************ 00:06:15.061 04:23:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:15.061 04:23:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.061 04:23:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.061 04:23:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.061 
************************************ 00:06:15.061 START TEST skip_rpc_with_json 00:06:15.061 ************************************ 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57314 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57314 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57314 ']' 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.061 04:23:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.061 [2024-11-27 04:23:11.597723] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:15.061 [2024-11-27 04:23:11.597853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57314 ] 00:06:15.321 [2024-11-27 04:23:11.753182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.321 [2024-11-27 04:23:11.894946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.702 [2024-11-27 04:23:12.926786] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:16.702 request: 00:06:16.702 { 00:06:16.702 "trtype": "tcp", 00:06:16.702 "method": "nvmf_get_transports", 00:06:16.702 "req_id": 1 00:06:16.702 } 00:06:16.702 Got JSON-RPC error response 00:06:16.702 response: 00:06:16.702 { 00:06:16.702 "code": -19, 00:06:16.702 "message": "No such device" 00:06:16.702 } 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.702 [2024-11-27 04:23:12.938870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.702 04:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.702 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.702 04:23:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:16.702 { 00:06:16.702 "subsystems": [ 00:06:16.702 { 00:06:16.702 "subsystem": "fsdev", 00:06:16.702 "config": [ 00:06:16.702 { 00:06:16.702 "method": "fsdev_set_opts", 00:06:16.702 "params": { 00:06:16.702 "fsdev_io_pool_size": 65535, 00:06:16.702 "fsdev_io_cache_size": 256 00:06:16.702 } 00:06:16.702 } 00:06:16.702 ] 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "subsystem": "keyring", 00:06:16.702 "config": [] 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "subsystem": "iobuf", 00:06:16.702 "config": [ 00:06:16.702 { 00:06:16.702 "method": "iobuf_set_options", 00:06:16.702 "params": { 00:06:16.702 "small_pool_count": 8192, 00:06:16.702 "large_pool_count": 1024, 00:06:16.702 "small_bufsize": 8192, 00:06:16.702 "large_bufsize": 135168, 00:06:16.702 "enable_numa": false 00:06:16.702 } 00:06:16.702 } 00:06:16.702 ] 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "subsystem": "sock", 00:06:16.702 "config": [ 00:06:16.702 { 00:06:16.702 "method": "sock_set_default_impl", 00:06:16.702 "params": { 00:06:16.702 "impl_name": "posix" 00:06:16.702 } 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "method": "sock_impl_set_options", 00:06:16.702 "params": { 00:06:16.702 "impl_name": "ssl", 00:06:16.702 "recv_buf_size": 4096, 00:06:16.702 "send_buf_size": 4096, 00:06:16.702 "enable_recv_pipe": true, 00:06:16.702 "enable_quickack": false, 00:06:16.702 
"enable_placement_id": 0, 00:06:16.702 "enable_zerocopy_send_server": true, 00:06:16.702 "enable_zerocopy_send_client": false, 00:06:16.702 "zerocopy_threshold": 0, 00:06:16.702 "tls_version": 0, 00:06:16.702 "enable_ktls": false 00:06:16.702 } 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "method": "sock_impl_set_options", 00:06:16.702 "params": { 00:06:16.702 "impl_name": "posix", 00:06:16.702 "recv_buf_size": 2097152, 00:06:16.702 "send_buf_size": 2097152, 00:06:16.702 "enable_recv_pipe": true, 00:06:16.702 "enable_quickack": false, 00:06:16.702 "enable_placement_id": 0, 00:06:16.702 "enable_zerocopy_send_server": true, 00:06:16.702 "enable_zerocopy_send_client": false, 00:06:16.702 "zerocopy_threshold": 0, 00:06:16.702 "tls_version": 0, 00:06:16.702 "enable_ktls": false 00:06:16.702 } 00:06:16.702 } 00:06:16.702 ] 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "subsystem": "vmd", 00:06:16.702 "config": [] 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "subsystem": "accel", 00:06:16.702 "config": [ 00:06:16.702 { 00:06:16.702 "method": "accel_set_options", 00:06:16.702 "params": { 00:06:16.702 "small_cache_size": 128, 00:06:16.702 "large_cache_size": 16, 00:06:16.702 "task_count": 2048, 00:06:16.702 "sequence_count": 2048, 00:06:16.702 "buf_count": 2048 00:06:16.702 } 00:06:16.702 } 00:06:16.702 ] 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "subsystem": "bdev", 00:06:16.702 "config": [ 00:06:16.702 { 00:06:16.702 "method": "bdev_set_options", 00:06:16.702 "params": { 00:06:16.702 "bdev_io_pool_size": 65535, 00:06:16.702 "bdev_io_cache_size": 256, 00:06:16.702 "bdev_auto_examine": true, 00:06:16.702 "iobuf_small_cache_size": 128, 00:06:16.702 "iobuf_large_cache_size": 16 00:06:16.702 } 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "method": "bdev_raid_set_options", 00:06:16.702 "params": { 00:06:16.702 "process_window_size_kb": 1024, 00:06:16.702 "process_max_bandwidth_mb_sec": 0 00:06:16.702 } 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "method": "bdev_iscsi_set_options", 
00:06:16.702 "params": { 00:06:16.702 "timeout_sec": 30 00:06:16.702 } 00:06:16.702 }, 00:06:16.702 { 00:06:16.702 "method": "bdev_nvme_set_options", 00:06:16.702 "params": { 00:06:16.702 "action_on_timeout": "none", 00:06:16.702 "timeout_us": 0, 00:06:16.702 "timeout_admin_us": 0, 00:06:16.702 "keep_alive_timeout_ms": 10000, 00:06:16.702 "arbitration_burst": 0, 00:06:16.702 "low_priority_weight": 0, 00:06:16.702 "medium_priority_weight": 0, 00:06:16.702 "high_priority_weight": 0, 00:06:16.702 "nvme_adminq_poll_period_us": 10000, 00:06:16.702 "nvme_ioq_poll_period_us": 0, 00:06:16.702 "io_queue_requests": 0, 00:06:16.702 "delay_cmd_submit": true, 00:06:16.702 "transport_retry_count": 4, 00:06:16.702 "bdev_retry_count": 3, 00:06:16.702 "transport_ack_timeout": 0, 00:06:16.702 "ctrlr_loss_timeout_sec": 0, 00:06:16.702 "reconnect_delay_sec": 0, 00:06:16.702 "fast_io_fail_timeout_sec": 0, 00:06:16.702 "disable_auto_failback": false, 00:06:16.702 "generate_uuids": false, 00:06:16.702 "transport_tos": 0, 00:06:16.702 "nvme_error_stat": false, 00:06:16.703 "rdma_srq_size": 0, 00:06:16.703 "io_path_stat": false, 00:06:16.703 "allow_accel_sequence": false, 00:06:16.703 "rdma_max_cq_size": 0, 00:06:16.703 "rdma_cm_event_timeout_ms": 0, 00:06:16.703 "dhchap_digests": [ 00:06:16.703 "sha256", 00:06:16.703 "sha384", 00:06:16.703 "sha512" 00:06:16.703 ], 00:06:16.703 "dhchap_dhgroups": [ 00:06:16.703 "null", 00:06:16.703 "ffdhe2048", 00:06:16.703 "ffdhe3072", 00:06:16.703 "ffdhe4096", 00:06:16.703 "ffdhe6144", 00:06:16.703 "ffdhe8192" 00:06:16.703 ] 00:06:16.703 } 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "method": "bdev_nvme_set_hotplug", 00:06:16.703 "params": { 00:06:16.703 "period_us": 100000, 00:06:16.703 "enable": false 00:06:16.703 } 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "method": "bdev_wait_for_examine" 00:06:16.703 } 00:06:16.703 ] 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "subsystem": "scsi", 00:06:16.703 "config": null 00:06:16.703 }, 00:06:16.703 { 
00:06:16.703 "subsystem": "scheduler", 00:06:16.703 "config": [ 00:06:16.703 { 00:06:16.703 "method": "framework_set_scheduler", 00:06:16.703 "params": { 00:06:16.703 "name": "static" 00:06:16.703 } 00:06:16.703 } 00:06:16.703 ] 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "subsystem": "vhost_scsi", 00:06:16.703 "config": [] 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "subsystem": "vhost_blk", 00:06:16.703 "config": [] 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "subsystem": "ublk", 00:06:16.703 "config": [] 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "subsystem": "nbd", 00:06:16.703 "config": [] 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "subsystem": "nvmf", 00:06:16.703 "config": [ 00:06:16.703 { 00:06:16.703 "method": "nvmf_set_config", 00:06:16.703 "params": { 00:06:16.703 "discovery_filter": "match_any", 00:06:16.703 "admin_cmd_passthru": { 00:06:16.703 "identify_ctrlr": false 00:06:16.703 }, 00:06:16.703 "dhchap_digests": [ 00:06:16.703 "sha256", 00:06:16.703 "sha384", 00:06:16.703 "sha512" 00:06:16.703 ], 00:06:16.703 "dhchap_dhgroups": [ 00:06:16.703 "null", 00:06:16.703 "ffdhe2048", 00:06:16.703 "ffdhe3072", 00:06:16.703 "ffdhe4096", 00:06:16.703 "ffdhe6144", 00:06:16.703 "ffdhe8192" 00:06:16.703 ] 00:06:16.703 } 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "method": "nvmf_set_max_subsystems", 00:06:16.703 "params": { 00:06:16.703 "max_subsystems": 1024 00:06:16.703 } 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "method": "nvmf_set_crdt", 00:06:16.703 "params": { 00:06:16.703 "crdt1": 0, 00:06:16.703 "crdt2": 0, 00:06:16.703 "crdt3": 0 00:06:16.703 } 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "method": "nvmf_create_transport", 00:06:16.703 "params": { 00:06:16.703 "trtype": "TCP", 00:06:16.703 "max_queue_depth": 128, 00:06:16.703 "max_io_qpairs_per_ctrlr": 127, 00:06:16.703 "in_capsule_data_size": 4096, 00:06:16.703 "max_io_size": 131072, 00:06:16.703 "io_unit_size": 131072, 00:06:16.703 "max_aq_depth": 128, 00:06:16.703 "num_shared_buffers": 511, 
00:06:16.703 "buf_cache_size": 4294967295, 00:06:16.703 "dif_insert_or_strip": false, 00:06:16.703 "zcopy": false, 00:06:16.703 "c2h_success": true, 00:06:16.703 "sock_priority": 0, 00:06:16.703 "abort_timeout_sec": 1, 00:06:16.703 "ack_timeout": 0, 00:06:16.703 "data_wr_pool_size": 0 00:06:16.703 } 00:06:16.703 } 00:06:16.703 ] 00:06:16.703 }, 00:06:16.703 { 00:06:16.703 "subsystem": "iscsi", 00:06:16.703 "config": [ 00:06:16.703 { 00:06:16.703 "method": "iscsi_set_options", 00:06:16.703 "params": { 00:06:16.703 "node_base": "iqn.2016-06.io.spdk", 00:06:16.703 "max_sessions": 128, 00:06:16.703 "max_connections_per_session": 2, 00:06:16.703 "max_queue_depth": 64, 00:06:16.703 "default_time2wait": 2, 00:06:16.703 "default_time2retain": 20, 00:06:16.703 "first_burst_length": 8192, 00:06:16.703 "immediate_data": true, 00:06:16.703 "allow_duplicated_isid": false, 00:06:16.703 "error_recovery_level": 0, 00:06:16.703 "nop_timeout": 60, 00:06:16.703 "nop_in_interval": 30, 00:06:16.703 "disable_chap": false, 00:06:16.703 "require_chap": false, 00:06:16.703 "mutual_chap": false, 00:06:16.703 "chap_group": 0, 00:06:16.703 "max_large_datain_per_connection": 64, 00:06:16.703 "max_r2t_per_connection": 4, 00:06:16.703 "pdu_pool_size": 36864, 00:06:16.703 "immediate_data_pool_size": 16384, 00:06:16.703 "data_out_pool_size": 2048 00:06:16.703 } 00:06:16.703 } 00:06:16.703 ] 00:06:16.703 } 00:06:16.703 ] 00:06:16.703 } 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57314 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57314 ']' 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57314 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57314 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57314' 00:06:16.703 killing process with pid 57314 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57314 00:06:16.703 04:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57314 00:06:19.241 04:23:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57370 00:06:19.241 04:23:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:19.241 04:23:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57370 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57370 ']' 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57370 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57370 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:06:24.527 killing process with pid 57370 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57370' 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57370 00:06:24.527 04:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57370 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:27.066 00:06:27.066 real 0m12.027s 00:06:27.066 user 0m11.155s 00:06:27.066 sys 0m1.164s 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.066 ************************************ 00:06:27.066 END TEST skip_rpc_with_json 00:06:27.066 ************************************ 00:06:27.066 04:23:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:27.066 04:23:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.066 04:23:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.066 04:23:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.066 ************************************ 00:06:27.066 START TEST skip_rpc_with_delay 00:06:27.066 ************************************ 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:27.066 04:23:23 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:27.066 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:27.327 [2024-11-27 04:23:23.705239] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:27.327 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:27.327 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.327 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.327 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.327 00:06:27.327 real 0m0.187s 00:06:27.327 user 0m0.088s 00:06:27.327 sys 0m0.098s 00:06:27.327 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.327 04:23:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:27.327 ************************************ 00:06:27.327 END TEST skip_rpc_with_delay 00:06:27.327 ************************************ 00:06:27.327 04:23:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:27.327 04:23:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:27.327 04:23:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:27.327 04:23:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.327 04:23:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.327 04:23:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.327 ************************************ 00:06:27.327 START TEST exit_on_failed_rpc_init 00:06:27.327 ************************************ 00:06:27.327 04:23:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:27.327 04:23:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57509 00:06:27.327 04:23:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.327 04:23:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57509 00:06:27.327 04:23:23 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57509 ']' 00:06:27.327 04:23:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.327 04:23:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.327 04:23:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.327 04:23:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.327 04:23:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:27.587 [2024-11-27 04:23:23.960962] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:27.587 [2024-11-27 04:23:23.961096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57509 ] 00:06:27.587 [2024-11-27 04:23:24.140106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.847 [2024-11-27 04:23:24.280318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:28.787 04:23:25 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:28.787 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.046 [2024-11-27 04:23:25.403464] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:29.046 [2024-11-27 04:23:25.403572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57527 ] 00:06:29.046 [2024-11-27 04:23:25.576371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.305 [2024-11-27 04:23:25.691018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.305 [2024-11-27 04:23:25.691114] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:29.305 [2024-11-27 04:23:25.691129] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:29.305 [2024-11-27 04:23:25.691140] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57509 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57509 ']' 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57509 00:06:29.564 04:23:25 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57509 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.564 killing process with pid 57509 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57509' 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57509 00:06:29.564 04:23:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57509 00:06:32.125 00:06:32.126 real 0m4.787s 00:06:32.126 user 0m4.878s 00:06:32.126 sys 0m0.773s 00:06:32.126 04:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.126 04:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:32.126 ************************************ 00:06:32.126 END TEST exit_on_failed_rpc_init 00:06:32.126 ************************************ 00:06:32.126 04:23:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:32.126 00:06:32.126 real 0m24.925s 00:06:32.126 user 0m23.310s 00:06:32.126 sys 0m2.718s 00:06:32.126 04:23:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.126 04:23:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.126 ************************************ 00:06:32.126 END TEST skip_rpc 00:06:32.126 ************************************ 00:06:32.385 04:23:28 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:32.385 04:23:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.385 04:23:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.385 04:23:28 -- common/autotest_common.sh@10 -- # set +x 00:06:32.385 ************************************ 00:06:32.385 START TEST rpc_client 00:06:32.385 ************************************ 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:32.386 * Looking for test storage... 00:06:32.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.386 04:23:28 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.386 --rc genhtml_branch_coverage=1 00:06:32.386 --rc genhtml_function_coverage=1 00:06:32.386 --rc genhtml_legend=1 00:06:32.386 --rc geninfo_all_blocks=1 00:06:32.386 --rc geninfo_unexecuted_blocks=1 00:06:32.386 00:06:32.386 ' 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.386 --rc genhtml_branch_coverage=1 00:06:32.386 --rc genhtml_function_coverage=1 00:06:32.386 --rc 
genhtml_legend=1 00:06:32.386 --rc geninfo_all_blocks=1 00:06:32.386 --rc geninfo_unexecuted_blocks=1 00:06:32.386 00:06:32.386 ' 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.386 --rc genhtml_branch_coverage=1 00:06:32.386 --rc genhtml_function_coverage=1 00:06:32.386 --rc genhtml_legend=1 00:06:32.386 --rc geninfo_all_blocks=1 00:06:32.386 --rc geninfo_unexecuted_blocks=1 00:06:32.386 00:06:32.386 ' 00:06:32.386 04:23:28 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.386 --rc genhtml_branch_coverage=1 00:06:32.386 --rc genhtml_function_coverage=1 00:06:32.386 --rc genhtml_legend=1 00:06:32.386 --rc geninfo_all_blocks=1 00:06:32.386 --rc geninfo_unexecuted_blocks=1 00:06:32.386 00:06:32.386 ' 00:06:32.386 04:23:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:32.646 OK 00:06:32.646 04:23:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:32.646 00:06:32.646 real 0m0.286s 00:06:32.646 user 0m0.152s 00:06:32.646 sys 0m0.151s 00:06:32.646 04:23:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.646 04:23:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 ************************************ 00:06:32.646 END TEST rpc_client 00:06:32.646 ************************************ 00:06:32.646 04:23:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:32.646 04:23:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.646 04:23:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.646 04:23:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 ************************************ 00:06:32.646 START TEST json_config 
00:06:32.646 ************************************ 00:06:32.646 04:23:29 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:32.646 04:23:29 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.646 04:23:29 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.646 04:23:29 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.905 04:23:29 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.905 04:23:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.905 04:23:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.905 04:23:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.905 04:23:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.905 04:23:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.905 04:23:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.905 04:23:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.905 04:23:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.905 04:23:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.905 04:23:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.905 04:23:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.905 04:23:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:32.905 04:23:29 json_config -- scripts/common.sh@345 -- # : 1 00:06:32.905 04:23:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.905 04:23:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.905 04:23:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:32.905 04:23:29 json_config -- scripts/common.sh@353 -- # local d=1 00:06:32.905 04:23:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.905 04:23:29 json_config -- scripts/common.sh@355 -- # echo 1 00:06:32.905 04:23:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.905 04:23:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:32.905 04:23:29 json_config -- scripts/common.sh@353 -- # local d=2 00:06:32.905 04:23:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.905 04:23:29 json_config -- scripts/common.sh@355 -- # echo 2 00:06:32.905 04:23:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.905 04:23:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.905 04:23:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.905 04:23:29 json_config -- scripts/common.sh@368 -- # return 0 00:06:32.905 04:23:29 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.905 04:23:29 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.905 --rc genhtml_branch_coverage=1 00:06:32.905 --rc genhtml_function_coverage=1 00:06:32.905 --rc genhtml_legend=1 00:06:32.905 --rc geninfo_all_blocks=1 00:06:32.905 --rc geninfo_unexecuted_blocks=1 00:06:32.905 00:06:32.905 ' 00:06:32.905 04:23:29 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.905 --rc genhtml_branch_coverage=1 00:06:32.905 --rc genhtml_function_coverage=1 00:06:32.905 --rc genhtml_legend=1 00:06:32.905 --rc geninfo_all_blocks=1 00:06:32.905 --rc geninfo_unexecuted_blocks=1 00:06:32.905 00:06:32.905 ' 00:06:32.905 04:23:29 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.905 --rc genhtml_branch_coverage=1 00:06:32.905 --rc genhtml_function_coverage=1 00:06:32.905 --rc genhtml_legend=1 00:06:32.905 --rc geninfo_all_blocks=1 00:06:32.905 --rc geninfo_unexecuted_blocks=1 00:06:32.905 00:06:32.905 ' 00:06:32.905 04:23:29 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.905 --rc genhtml_branch_coverage=1 00:06:32.905 --rc genhtml_function_coverage=1 00:06:32.905 --rc genhtml_legend=1 00:06:32.905 --rc geninfo_all_blocks=1 00:06:32.905 --rc geninfo_unexecuted_blocks=1 00:06:32.905 00:06:32.905 ' 00:06:32.905 04:23:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d5e0a0d0-cc2e-4042-b160-ee4b4435e5e2 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=d5e0a0d0-cc2e-4042-b160-ee4b4435e5e2 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.905 04:23:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.905 04:23:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.905 04:23:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.905 04:23:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.905 04:23:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.905 04:23:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.905 04:23:29 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.905 04:23:29 json_config -- paths/export.sh@5 -- # export PATH 00:06:32.905 04:23:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@51 -- # : 0 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.905 04:23:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.906 04:23:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.906 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.906 04:23:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.906 04:23:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.906 04:23:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.906 04:23:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:32.906 04:23:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:32.906 04:23:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:32.906 04:23:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:32.906 04:23:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:32.906 04:23:29 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:32.906 WARNING: No tests are enabled so not running JSON configuration tests 00:06:32.906 04:23:29 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:32.906 00:06:32.906 real 0m0.225s 00:06:32.906 user 0m0.130s 00:06:32.906 sys 0m0.103s 00:06:32.906 04:23:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.906 04:23:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.906 ************************************ 00:06:32.906 END TEST json_config 00:06:32.906 ************************************ 00:06:32.906 04:23:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:32.906 04:23:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.906 04:23:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.906 04:23:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.906 ************************************ 00:06:32.906 START TEST json_config_extra_key 00:06:32.906 ************************************ 00:06:32.906 04:23:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:32.906 04:23:29 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.906 04:23:29 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:32.906 04:23:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.166 04:23:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.166 04:23:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:33.166 04:23:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.166 04:23:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.166 --rc genhtml_branch_coverage=1 00:06:33.166 --rc genhtml_function_coverage=1 00:06:33.166 --rc genhtml_legend=1 00:06:33.166 --rc geninfo_all_blocks=1 00:06:33.166 --rc geninfo_unexecuted_blocks=1 00:06:33.166 00:06:33.166 ' 00:06:33.166 04:23:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.166 --rc genhtml_branch_coverage=1 00:06:33.166 --rc genhtml_function_coverage=1 00:06:33.166 --rc 
genhtml_legend=1 00:06:33.166 --rc geninfo_all_blocks=1 00:06:33.166 --rc geninfo_unexecuted_blocks=1 00:06:33.166 00:06:33.166 ' 00:06:33.166 04:23:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.166 --rc genhtml_branch_coverage=1 00:06:33.166 --rc genhtml_function_coverage=1 00:06:33.166 --rc genhtml_legend=1 00:06:33.166 --rc geninfo_all_blocks=1 00:06:33.166 --rc geninfo_unexecuted_blocks=1 00:06:33.166 00:06:33.166 ' 00:06:33.166 04:23:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.166 --rc genhtml_branch_coverage=1 00:06:33.166 --rc genhtml_function_coverage=1 00:06:33.166 --rc genhtml_legend=1 00:06:33.166 --rc geninfo_all_blocks=1 00:06:33.166 --rc geninfo_unexecuted_blocks=1 00:06:33.166 00:06:33.166 ' 00:06:33.166 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:33.166 04:23:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:33.166 04:23:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.166 04:23:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.166 04:23:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.166 04:23:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d5e0a0d0-cc2e-4042-b160-ee4b4435e5e2 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d5e0a0d0-cc2e-4042-b160-ee4b4435e5e2 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.167 04:23:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.167 04:23:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.167 04:23:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.167 04:23:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.167 04:23:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.167 04:23:29 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.167 04:23:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.167 04:23:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:33.167 04:23:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.167 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.167 04:23:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:33.167 INFO: launching applications... 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:06:33.167 04:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57739 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.167 Waiting for target to run... 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57739 /var/tmp/spdk_tgt.sock 00:06:33.167 04:23:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:33.167 04:23:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57739 ']' 00:06:33.167 04:23:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.167 04:23:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.167 04:23:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:33.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:33.167 04:23:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.167 04:23:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:33.167 [2024-11-27 04:23:29.711896] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:33.167 [2024-11-27 04:23:29.712127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57739 ] 00:06:33.737 [2024-11-27 04:23:30.101206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.737 [2024-11-27 04:23:30.221870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.678 04:23:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.678 04:23:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:34.678 04:23:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:34.678 00:06:34.678 04:23:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:34.678 INFO: shutting down applications... 
00:06:34.678 04:23:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:34.678 04:23:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:34.678 04:23:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:34.678 04:23:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57739 ]] 00:06:34.678 04:23:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57739 00:06:34.678 04:23:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:34.678 04:23:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.678 04:23:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57739 00:06:34.678 04:23:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.953 04:23:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.953 04:23:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.953 04:23:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57739 00:06:34.953 04:23:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.538 04:23:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.538 04:23:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.538 04:23:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57739 00:06:35.538 04:23:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:36.108 04:23:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:36.108 04:23:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.108 04:23:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57739 00:06:36.108 04:23:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:36.679 04:23:33 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:36.679 04:23:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.679 04:23:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57739 00:06:36.679 04:23:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:36.939 04:23:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:36.939 04:23:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.939 04:23:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57739 00:06:36.939 04:23:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.510 04:23:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.510 04:23:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.510 04:23:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57739 00:06:37.510 04:23:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:38.084 04:23:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:38.084 04:23:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.084 04:23:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57739 00:06:38.084 04:23:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:38.084 SPDK target shutdown done 00:06:38.084 Success 00:06:38.084 04:23:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:38.084 04:23:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:38.084 04:23:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:38.084 04:23:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:38.084 00:06:38.084 real 0m5.134s 00:06:38.084 user 0m4.384s 00:06:38.084 sys 0m0.586s 00:06:38.084 04:23:34 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:38.084 04:23:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:38.084 ************************************ 00:06:38.084 END TEST json_config_extra_key 00:06:38.084 ************************************ 00:06:38.084 04:23:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.084 04:23:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.084 04:23:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.084 04:23:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.084 ************************************ 00:06:38.084 START TEST alias_rpc 00:06:38.084 ************************************ 00:06:38.084 04:23:34 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.344 * Looking for test storage... 00:06:38.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:38.344 04:23:34 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.344 04:23:34 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.344 04:23:34 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.344 04:23:34 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.344 04:23:34 alias_rpc -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.344 04:23:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:38.344 04:23:34 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.344 04:23:34 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.344 --rc genhtml_branch_coverage=1 00:06:38.344 --rc genhtml_function_coverage=1 00:06:38.344 --rc genhtml_legend=1 00:06:38.344 --rc geninfo_all_blocks=1 00:06:38.344 --rc 
geninfo_unexecuted_blocks=1 00:06:38.344 00:06:38.344 ' 00:06:38.344 04:23:34 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.345 --rc genhtml_branch_coverage=1 00:06:38.345 --rc genhtml_function_coverage=1 00:06:38.345 --rc genhtml_legend=1 00:06:38.345 --rc geninfo_all_blocks=1 00:06:38.345 --rc geninfo_unexecuted_blocks=1 00:06:38.345 00:06:38.345 ' 00:06:38.345 04:23:34 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.345 --rc genhtml_branch_coverage=1 00:06:38.345 --rc genhtml_function_coverage=1 00:06:38.345 --rc genhtml_legend=1 00:06:38.345 --rc geninfo_all_blocks=1 00:06:38.345 --rc geninfo_unexecuted_blocks=1 00:06:38.345 00:06:38.345 ' 00:06:38.345 04:23:34 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.345 --rc genhtml_branch_coverage=1 00:06:38.345 --rc genhtml_function_coverage=1 00:06:38.345 --rc genhtml_legend=1 00:06:38.345 --rc geninfo_all_blocks=1 00:06:38.345 --rc geninfo_unexecuted_blocks=1 00:06:38.345 00:06:38.345 ' 00:06:38.345 04:23:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.345 04:23:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57862 00:06:38.345 04:23:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:38.345 04:23:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57862 00:06:38.345 04:23:34 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57862 ']' 00:06:38.345 04:23:34 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.345 04:23:34 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.345 04:23:34 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.345 04:23:34 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.345 04:23:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.345 [2024-11-27 04:23:34.910394] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:38.345 [2024-11-27 04:23:34.910621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57862 ] 00:06:38.604 [2024-11-27 04:23:35.082582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.864 [2024-11-27 04:23:35.223652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.803 04:23:36 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.803 04:23:36 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.803 04:23:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:40.063 04:23:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57862 00:06:40.063 04:23:36 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57862 ']' 00:06:40.063 04:23:36 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57862 00:06:40.063 04:23:36 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:40.063 04:23:36 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.063 04:23:36 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57862 00:06:40.063 04:23:36 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.063 04:23:36 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.063 
04:23:36 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57862' 00:06:40.063 killing process with pid 57862 00:06:40.063 04:23:36 alias_rpc -- common/autotest_common.sh@973 -- # kill 57862 00:06:40.063 04:23:36 alias_rpc -- common/autotest_common.sh@978 -- # wait 57862 00:06:43.354 ************************************ 00:06:43.354 END TEST alias_rpc 00:06:43.354 ************************************ 00:06:43.354 00:06:43.354 real 0m4.600s 00:06:43.354 user 0m4.382s 00:06:43.354 sys 0m0.745s 00:06:43.354 04:23:39 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.354 04:23:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.354 04:23:39 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:43.354 04:23:39 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:43.354 04:23:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.354 04:23:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.354 04:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:43.354 ************************************ 00:06:43.354 START TEST spdkcli_tcp 00:06:43.354 ************************************ 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:43.354 * Looking for test storage... 
00:06:43.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.354 04:23:39 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.354 --rc genhtml_branch_coverage=1 00:06:43.354 --rc genhtml_function_coverage=1 00:06:43.354 --rc genhtml_legend=1 00:06:43.354 --rc geninfo_all_blocks=1 00:06:43.354 --rc geninfo_unexecuted_blocks=1 00:06:43.354 00:06:43.354 ' 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.354 --rc genhtml_branch_coverage=1 00:06:43.354 --rc genhtml_function_coverage=1 00:06:43.354 --rc genhtml_legend=1 00:06:43.354 --rc geninfo_all_blocks=1 00:06:43.354 --rc geninfo_unexecuted_blocks=1 00:06:43.354 00:06:43.354 ' 00:06:43.354 04:23:39 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.354 --rc genhtml_branch_coverage=1 00:06:43.354 --rc genhtml_function_coverage=1 00:06:43.354 --rc genhtml_legend=1 00:06:43.354 --rc geninfo_all_blocks=1 00:06:43.354 --rc geninfo_unexecuted_blocks=1 00:06:43.354 00:06:43.354 ' 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.354 --rc genhtml_branch_coverage=1 00:06:43.354 --rc genhtml_function_coverage=1 00:06:43.354 --rc genhtml_legend=1 00:06:43.354 --rc geninfo_all_blocks=1 00:06:43.354 --rc geninfo_unexecuted_blocks=1 00:06:43.354 00:06:43.354 ' 00:06:43.354 04:23:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:43.354 04:23:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:43.354 04:23:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:43.354 04:23:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:43.354 04:23:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:43.354 04:23:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:43.354 04:23:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.354 04:23:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.355 04:23:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57975 00:06:43.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.355 04:23:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57975 00:06:43.355 04:23:39 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57975 ']' 00:06:43.355 04:23:39 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.355 04:23:39 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.355 04:23:39 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.355 04:23:39 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.355 04:23:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:43.355 04:23:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.355 [2024-11-27 04:23:39.535055] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:43.355 [2024-11-27 04:23:39.535560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57975 ] 00:06:43.355 [2024-11-27 04:23:39.711260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.355 [2024-11-27 04:23:39.855727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.355 [2024-11-27 04:23:39.855775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.735 04:23:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.735 04:23:40 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:44.735 04:23:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57992 00:06:44.735 04:23:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:44.735 04:23:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:44.735 [ 00:06:44.735 "bdev_malloc_delete", 00:06:44.735 "bdev_malloc_create", 00:06:44.735 "bdev_null_resize", 00:06:44.735 "bdev_null_delete", 00:06:44.735 "bdev_null_create", 00:06:44.735 "bdev_nvme_cuse_unregister", 00:06:44.735 "bdev_nvme_cuse_register", 00:06:44.735 "bdev_opal_new_user", 00:06:44.735 "bdev_opal_set_lock_state", 00:06:44.735 "bdev_opal_delete", 00:06:44.735 "bdev_opal_get_info", 00:06:44.735 "bdev_opal_create", 00:06:44.735 "bdev_nvme_opal_revert", 00:06:44.735 "bdev_nvme_opal_init", 00:06:44.735 "bdev_nvme_send_cmd", 00:06:44.735 "bdev_nvme_set_keys", 00:06:44.735 "bdev_nvme_get_path_iostat", 00:06:44.735 "bdev_nvme_get_mdns_discovery_info", 00:06:44.735 "bdev_nvme_stop_mdns_discovery", 00:06:44.735 "bdev_nvme_start_mdns_discovery", 00:06:44.735 "bdev_nvme_set_multipath_policy", 00:06:44.735 "bdev_nvme_set_preferred_path", 00:06:44.735 "bdev_nvme_get_io_paths", 00:06:44.735 "bdev_nvme_remove_error_injection", 00:06:44.735 "bdev_nvme_add_error_injection", 00:06:44.735 "bdev_nvme_get_discovery_info", 00:06:44.735 "bdev_nvme_stop_discovery", 00:06:44.735 "bdev_nvme_start_discovery", 00:06:44.735 "bdev_nvme_get_controller_health_info", 00:06:44.735 "bdev_nvme_disable_controller", 00:06:44.735 "bdev_nvme_enable_controller", 00:06:44.735 "bdev_nvme_reset_controller", 00:06:44.735 "bdev_nvme_get_transport_statistics", 00:06:44.735 "bdev_nvme_apply_firmware", 00:06:44.735 "bdev_nvme_detach_controller", 00:06:44.735 "bdev_nvme_get_controllers", 00:06:44.735 "bdev_nvme_attach_controller", 00:06:44.735 "bdev_nvme_set_hotplug", 00:06:44.735 "bdev_nvme_set_options", 00:06:44.735 "bdev_passthru_delete", 00:06:44.735 "bdev_passthru_create", 00:06:44.735 "bdev_lvol_set_parent_bdev", 00:06:44.735 "bdev_lvol_set_parent", 00:06:44.735 "bdev_lvol_check_shallow_copy", 00:06:44.735 "bdev_lvol_start_shallow_copy", 00:06:44.735 "bdev_lvol_grow_lvstore", 00:06:44.735 
"bdev_lvol_get_lvols", 00:06:44.735 "bdev_lvol_get_lvstores", 00:06:44.735 "bdev_lvol_delete", 00:06:44.735 "bdev_lvol_set_read_only", 00:06:44.735 "bdev_lvol_resize", 00:06:44.735 "bdev_lvol_decouple_parent", 00:06:44.735 "bdev_lvol_inflate", 00:06:44.735 "bdev_lvol_rename", 00:06:44.735 "bdev_lvol_clone_bdev", 00:06:44.735 "bdev_lvol_clone", 00:06:44.735 "bdev_lvol_snapshot", 00:06:44.735 "bdev_lvol_create", 00:06:44.735 "bdev_lvol_delete_lvstore", 00:06:44.735 "bdev_lvol_rename_lvstore", 00:06:44.735 "bdev_lvol_create_lvstore", 00:06:44.735 "bdev_raid_set_options", 00:06:44.735 "bdev_raid_remove_base_bdev", 00:06:44.735 "bdev_raid_add_base_bdev", 00:06:44.735 "bdev_raid_delete", 00:06:44.735 "bdev_raid_create", 00:06:44.735 "bdev_raid_get_bdevs", 00:06:44.735 "bdev_error_inject_error", 00:06:44.735 "bdev_error_delete", 00:06:44.735 "bdev_error_create", 00:06:44.735 "bdev_split_delete", 00:06:44.735 "bdev_split_create", 00:06:44.735 "bdev_delay_delete", 00:06:44.735 "bdev_delay_create", 00:06:44.735 "bdev_delay_update_latency", 00:06:44.735 "bdev_zone_block_delete", 00:06:44.735 "bdev_zone_block_create", 00:06:44.735 "blobfs_create", 00:06:44.736 "blobfs_detect", 00:06:44.736 "blobfs_set_cache_size", 00:06:44.736 "bdev_aio_delete", 00:06:44.736 "bdev_aio_rescan", 00:06:44.736 "bdev_aio_create", 00:06:44.736 "bdev_ftl_set_property", 00:06:44.736 "bdev_ftl_get_properties", 00:06:44.736 "bdev_ftl_get_stats", 00:06:44.736 "bdev_ftl_unmap", 00:06:44.736 "bdev_ftl_unload", 00:06:44.736 "bdev_ftl_delete", 00:06:44.736 "bdev_ftl_load", 00:06:44.736 "bdev_ftl_create", 00:06:44.736 "bdev_virtio_attach_controller", 00:06:44.736 "bdev_virtio_scsi_get_devices", 00:06:44.736 "bdev_virtio_detach_controller", 00:06:44.736 "bdev_virtio_blk_set_hotplug", 00:06:44.736 "bdev_iscsi_delete", 00:06:44.736 "bdev_iscsi_create", 00:06:44.736 "bdev_iscsi_set_options", 00:06:44.736 "accel_error_inject_error", 00:06:44.736 "ioat_scan_accel_module", 00:06:44.736 "dsa_scan_accel_module", 
00:06:44.736 "iaa_scan_accel_module", 00:06:44.736 "keyring_file_remove_key", 00:06:44.736 "keyring_file_add_key", 00:06:44.736 "keyring_linux_set_options", 00:06:44.736 "fsdev_aio_delete", 00:06:44.736 "fsdev_aio_create", 00:06:44.736 "iscsi_get_histogram", 00:06:44.736 "iscsi_enable_histogram", 00:06:44.736 "iscsi_set_options", 00:06:44.736 "iscsi_get_auth_groups", 00:06:44.736 "iscsi_auth_group_remove_secret", 00:06:44.736 "iscsi_auth_group_add_secret", 00:06:44.736 "iscsi_delete_auth_group", 00:06:44.736 "iscsi_create_auth_group", 00:06:44.736 "iscsi_set_discovery_auth", 00:06:44.736 "iscsi_get_options", 00:06:44.736 "iscsi_target_node_request_logout", 00:06:44.736 "iscsi_target_node_set_redirect", 00:06:44.736 "iscsi_target_node_set_auth", 00:06:44.736 "iscsi_target_node_add_lun", 00:06:44.736 "iscsi_get_stats", 00:06:44.736 "iscsi_get_connections", 00:06:44.736 "iscsi_portal_group_set_auth", 00:06:44.736 "iscsi_start_portal_group", 00:06:44.736 "iscsi_delete_portal_group", 00:06:44.736 "iscsi_create_portal_group", 00:06:44.736 "iscsi_get_portal_groups", 00:06:44.736 "iscsi_delete_target_node", 00:06:44.736 "iscsi_target_node_remove_pg_ig_maps", 00:06:44.736 "iscsi_target_node_add_pg_ig_maps", 00:06:44.736 "iscsi_create_target_node", 00:06:44.736 "iscsi_get_target_nodes", 00:06:44.736 "iscsi_delete_initiator_group", 00:06:44.736 "iscsi_initiator_group_remove_initiators", 00:06:44.736 "iscsi_initiator_group_add_initiators", 00:06:44.736 "iscsi_create_initiator_group", 00:06:44.736 "iscsi_get_initiator_groups", 00:06:44.736 "nvmf_set_crdt", 00:06:44.736 "nvmf_set_config", 00:06:44.736 "nvmf_set_max_subsystems", 00:06:44.736 "nvmf_stop_mdns_prr", 00:06:44.736 "nvmf_publish_mdns_prr", 00:06:44.736 "nvmf_subsystem_get_listeners", 00:06:44.736 "nvmf_subsystem_get_qpairs", 00:06:44.736 "nvmf_subsystem_get_controllers", 00:06:44.736 "nvmf_get_stats", 00:06:44.736 "nvmf_get_transports", 00:06:44.736 "nvmf_create_transport", 00:06:44.736 "nvmf_get_targets", 00:06:44.736 
"nvmf_delete_target", 00:06:44.736 "nvmf_create_target", 00:06:44.736 "nvmf_subsystem_allow_any_host", 00:06:44.736 "nvmf_subsystem_set_keys", 00:06:44.736 "nvmf_subsystem_remove_host", 00:06:44.736 "nvmf_subsystem_add_host", 00:06:44.736 "nvmf_ns_remove_host", 00:06:44.736 "nvmf_ns_add_host", 00:06:44.736 "nvmf_subsystem_remove_ns", 00:06:44.736 "nvmf_subsystem_set_ns_ana_group", 00:06:44.736 "nvmf_subsystem_add_ns", 00:06:44.736 "nvmf_subsystem_listener_set_ana_state", 00:06:44.736 "nvmf_discovery_get_referrals", 00:06:44.736 "nvmf_discovery_remove_referral", 00:06:44.736 "nvmf_discovery_add_referral", 00:06:44.736 "nvmf_subsystem_remove_listener", 00:06:44.736 "nvmf_subsystem_add_listener", 00:06:44.736 "nvmf_delete_subsystem", 00:06:44.736 "nvmf_create_subsystem", 00:06:44.736 "nvmf_get_subsystems", 00:06:44.736 "env_dpdk_get_mem_stats", 00:06:44.736 "nbd_get_disks", 00:06:44.736 "nbd_stop_disk", 00:06:44.736 "nbd_start_disk", 00:06:44.736 "ublk_recover_disk", 00:06:44.736 "ublk_get_disks", 00:06:44.736 "ublk_stop_disk", 00:06:44.736 "ublk_start_disk", 00:06:44.736 "ublk_destroy_target", 00:06:44.736 "ublk_create_target", 00:06:44.736 "virtio_blk_create_transport", 00:06:44.736 "virtio_blk_get_transports", 00:06:44.736 "vhost_controller_set_coalescing", 00:06:44.736 "vhost_get_controllers", 00:06:44.736 "vhost_delete_controller", 00:06:44.736 "vhost_create_blk_controller", 00:06:44.736 "vhost_scsi_controller_remove_target", 00:06:44.736 "vhost_scsi_controller_add_target", 00:06:44.736 "vhost_start_scsi_controller", 00:06:44.736 "vhost_create_scsi_controller", 00:06:44.736 "thread_set_cpumask", 00:06:44.736 "scheduler_set_options", 00:06:44.736 "framework_get_governor", 00:06:44.736 "framework_get_scheduler", 00:06:44.736 "framework_set_scheduler", 00:06:44.736 "framework_get_reactors", 00:06:44.736 "thread_get_io_channels", 00:06:44.736 "thread_get_pollers", 00:06:44.736 "thread_get_stats", 00:06:44.736 "framework_monitor_context_switch", 00:06:44.736 
"spdk_kill_instance", 00:06:44.736 "log_enable_timestamps", 00:06:44.736 "log_get_flags", 00:06:44.736 "log_clear_flag", 00:06:44.736 "log_set_flag", 00:06:44.736 "log_get_level", 00:06:44.736 "log_set_level", 00:06:44.736 "log_get_print_level", 00:06:44.736 "log_set_print_level", 00:06:44.736 "framework_enable_cpumask_locks", 00:06:44.736 "framework_disable_cpumask_locks", 00:06:44.736 "framework_wait_init", 00:06:44.736 "framework_start_init", 00:06:44.736 "scsi_get_devices", 00:06:44.736 "bdev_get_histogram", 00:06:44.736 "bdev_enable_histogram", 00:06:44.736 "bdev_set_qos_limit", 00:06:44.736 "bdev_set_qd_sampling_period", 00:06:44.736 "bdev_get_bdevs", 00:06:44.736 "bdev_reset_iostat", 00:06:44.736 "bdev_get_iostat", 00:06:44.736 "bdev_examine", 00:06:44.736 "bdev_wait_for_examine", 00:06:44.736 "bdev_set_options", 00:06:44.736 "accel_get_stats", 00:06:44.736 "accel_set_options", 00:06:44.736 "accel_set_driver", 00:06:44.736 "accel_crypto_key_destroy", 00:06:44.736 "accel_crypto_keys_get", 00:06:44.736 "accel_crypto_key_create", 00:06:44.736 "accel_assign_opc", 00:06:44.736 "accel_get_module_info", 00:06:44.736 "accel_get_opc_assignments", 00:06:44.736 "vmd_rescan", 00:06:44.736 "vmd_remove_device", 00:06:44.736 "vmd_enable", 00:06:44.736 "sock_get_default_impl", 00:06:44.736 "sock_set_default_impl", 00:06:44.736 "sock_impl_set_options", 00:06:44.736 "sock_impl_get_options", 00:06:44.736 "iobuf_get_stats", 00:06:44.736 "iobuf_set_options", 00:06:44.736 "keyring_get_keys", 00:06:44.736 "framework_get_pci_devices", 00:06:44.736 "framework_get_config", 00:06:44.736 "framework_get_subsystems", 00:06:44.736 "fsdev_set_opts", 00:06:44.736 "fsdev_get_opts", 00:06:44.736 "trace_get_info", 00:06:44.736 "trace_get_tpoint_group_mask", 00:06:44.736 "trace_disable_tpoint_group", 00:06:44.736 "trace_enable_tpoint_group", 00:06:44.736 "trace_clear_tpoint_mask", 00:06:44.736 "trace_set_tpoint_mask", 00:06:44.736 "notify_get_notifications", 00:06:44.736 "notify_get_types", 
00:06:44.736 "spdk_get_version", 00:06:44.736 "rpc_get_methods" 00:06:44.736 ] 00:06:44.736 04:23:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.736 04:23:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:44.736 04:23:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57975 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57975 ']' 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57975 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57975 00:06:44.736 killing process with pid 57975 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57975' 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57975 00:06:44.736 04:23:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57975 00:06:47.308 ************************************ 00:06:47.308 END TEST spdkcli_tcp 00:06:47.308 ************************************ 00:06:47.308 00:06:47.308 real 0m4.618s 00:06:47.308 user 0m8.113s 00:06:47.308 sys 0m0.767s 00:06:47.308 04:23:43 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.308 04:23:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.568 04:23:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility 
/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:47.568 04:23:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.568 04:23:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.568 04:23:43 -- common/autotest_common.sh@10 -- # set +x 00:06:47.568 ************************************ 00:06:47.568 START TEST dpdk_mem_utility 00:06:47.568 ************************************ 00:06:47.568 04:23:43 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:47.568 * Looking for test storage... 00:06:47.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:47.568 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.568 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.568 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.568 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.568 04:23:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:47.568 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.569 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.569 --rc genhtml_branch_coverage=1 00:06:47.569 --rc genhtml_function_coverage=1 00:06:47.569 --rc genhtml_legend=1 00:06:47.569 --rc geninfo_all_blocks=1 00:06:47.569 --rc geninfo_unexecuted_blocks=1 00:06:47.569 00:06:47.569 ' 
00:06:47.569 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.569 --rc genhtml_branch_coverage=1 00:06:47.569 --rc genhtml_function_coverage=1 00:06:47.569 --rc genhtml_legend=1 00:06:47.569 --rc geninfo_all_blocks=1 00:06:47.569 --rc geninfo_unexecuted_blocks=1 00:06:47.569 00:06:47.569 ' 00:06:47.569 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.569 --rc genhtml_branch_coverage=1 00:06:47.569 --rc genhtml_function_coverage=1 00:06:47.569 --rc genhtml_legend=1 00:06:47.569 --rc geninfo_all_blocks=1 00:06:47.569 --rc geninfo_unexecuted_blocks=1 00:06:47.569 00:06:47.569 ' 00:06:47.569 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.569 --rc genhtml_branch_coverage=1 00:06:47.569 --rc genhtml_function_coverage=1 00:06:47.569 --rc genhtml_legend=1 00:06:47.569 --rc geninfo_all_blocks=1 00:06:47.569 --rc geninfo_unexecuted_blocks=1 00:06:47.569 00:06:47.569 ' 00:06:47.569 04:23:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:47.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.829 04:23:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58103 00:06:47.829 04:23:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.829 04:23:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58103 00:06:47.829 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58103 ']' 00:06:47.829 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.829 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.829 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.829 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.829 04:23:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:47.829 [2024-11-27 04:23:44.243932] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:47.829 [2024-11-27 04:23:44.244145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58103 ] 00:06:48.088 [2024-11-27 04:23:44.419591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.088 [2024-11-27 04:23:44.563285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.026 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.026 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:49.026 04:23:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:49.026 04:23:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:49.026 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.287 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.287 { 00:06:49.287 "filename": "/tmp/spdk_mem_dump.txt" 00:06:49.287 } 00:06:49.287 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.287 04:23:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:49.287 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:49.287 1 heaps totaling size 824.000000 MiB 00:06:49.287 size: 824.000000 MiB heap id: 0 00:06:49.287 end heaps---------- 00:06:49.287 9 mempools totaling size 603.782043 MiB 00:06:49.287 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:49.287 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:49.287 size: 100.555481 MiB name: bdev_io_58103 00:06:49.287 size: 50.003479 MiB name: msgpool_58103 00:06:49.287 size: 36.509338 MiB name: fsdev_io_58103 00:06:49.287 size: 
21.763794 MiB name: PDU_Pool 00:06:49.287 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:49.287 size: 4.133484 MiB name: evtpool_58103 00:06:49.287 size: 0.026123 MiB name: Session_Pool 00:06:49.287 end mempools------- 00:06:49.287 6 memzones totaling size 4.142822 MiB 00:06:49.287 size: 1.000366 MiB name: RG_ring_0_58103 00:06:49.287 size: 1.000366 MiB name: RG_ring_1_58103 00:06:49.287 size: 1.000366 MiB name: RG_ring_4_58103 00:06:49.287 size: 1.000366 MiB name: RG_ring_5_58103 00:06:49.287 size: 0.125366 MiB name: RG_ring_2_58103 00:06:49.287 size: 0.015991 MiB name: RG_ring_3_58103 00:06:49.287 end memzones------- 00:06:49.287 04:23:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:49.287 heap id: 0 total size: 824.000000 MiB number of busy elements: 317 number of free elements: 18 00:06:49.287 list of free elements. size: 16.780884 MiB 00:06:49.287 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:49.287 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:49.287 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:49.287 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:49.287 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:49.287 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:49.287 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:49.287 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:49.287 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:49.287 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:49.287 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:49.287 element at address: 0x20001b400000 with size: 0.562439 MiB 00:06:49.287 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:49.287 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:49.287 element at address: 0x200019e00000 
with size: 0.485413 MiB 00:06:49.287 element at address: 0x200012c00000 with size: 0.433228 MiB 00:06:49.287 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:49.287 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:49.287 list of standard malloc elements. size: 199.288208 MiB 00:06:49.287 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:49.287 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:49.287 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:49.287 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:49.287 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:49.287 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:49.287 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:49.287 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:49.287 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:49.287 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:49.287 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:49.287 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:49.287 element at address: 
0x2000004fe940 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:49.287 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:49.288 
element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7e0c0 with size: 0.000244 
MiB 00:06:49.288 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bff180 
with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:49.288 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:49.288 element at 
address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:49.288 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:49.289 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:49.289 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b490bc0 with size: 0.000244 MiB 
00:06:49.289 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4927c0 with 
size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:49.289 element at address: 
0x20001b4943c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:49.289 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:49.289 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:49.289 
element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:49.289 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886d180 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886d280 with size: 0.000244 
MiB 00:06:49.290 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886ee80 
with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:49.290 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:49.290 list of memzone associated elements. 
size: 607.930908 MiB
00:06:49.290 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:06:49.290 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:49.290 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:06:49.290 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:49.290 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:06:49.290 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58103_0
00:06:49.290 element at address: 0x200000dff340 with size: 48.003113 MiB
00:06:49.290 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58103_0
00:06:49.290 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:06:49.290 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58103_0
00:06:49.290 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:06:49.290 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:49.290 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:06:49.290 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:49.290 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:06:49.290 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58103_0
00:06:49.290 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:06:49.290 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58103
00:06:49.290 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:06:49.290 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58103
00:06:49.290 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:06:49.290 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:49.290 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:06:49.290 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:49.290 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:06:49.290 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:49.290 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:06:49.290 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:49.290 element at address: 0x200000cff100 with size: 1.000549 MiB
00:06:49.290 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58103
00:06:49.290 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:06:49.290 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58103
00:06:49.290 element at address: 0x200019affd40 with size: 1.000549 MiB
00:06:49.290 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58103
00:06:49.290 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:06:49.290 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58103
00:06:49.290 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:06:49.290 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58103
00:06:49.290 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:06:49.290 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58103
00:06:49.290 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:06:49.290 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:49.290 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:06:49.290 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:49.290 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:06:49.290 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:49.290 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:06:49.290 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58103
00:06:49.290 element at address: 0x20000085df80 with size: 0.125549 MiB
00:06:49.290 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58103
00:06:49.290 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:06:49.290
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:49.290 element at address: 0x200028864140 with size: 0.023804 MiB
00:06:49.290 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:49.290 element at address: 0x200000859d40 with size: 0.016174 MiB
00:06:49.290 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58103
00:06:49.291 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:06:49.291 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:49.291 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:06:49.291 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58103
00:06:49.291 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:06:49.291 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58103
00:06:49.291 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:06:49.291 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58103
00:06:49.291 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:06:49.291 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:49.291 04:23:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:49.291 04:23:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58103
00:06:49.291 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58103 ']'
00:06:49.291 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58103
00:06:49.291 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:06:49.291 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:49.291 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58103
00:06:49.291 killing process with pid 58103
04:23:45 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:49.291 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:49.291 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58103'
00:06:49.291 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58103
00:06:49.291 04:23:45 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58103
00:06:52.580
00:06:52.580 real 0m4.590s
00:06:52.580 user 0m4.328s
00:06:52.580 sys 0m0.733s
00:06:52.580 04:23:48 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.580 04:23:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:52.580 ************************************
00:06:52.580 END TEST dpdk_mem_utility
00:06:52.580 ************************************
00:06:52.580 04:23:48 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:52.580 04:23:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:52.580 04:23:48 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.580 04:23:48 -- common/autotest_common.sh@10 -- # set +x
00:06:52.580 ************************************
00:06:52.580 START TEST event
00:06:52.580 ************************************
00:06:52.580 04:23:48 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:52.580 * Looking for test storage...
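The dpdk_mem_utility dump above is one `element at address: <addr> with size: <size> MiB` record per log line, which is easier to triage in aggregate than by eye. A minimal sketch, assuming that record format; `summarize_elements` is an illustrative helper, not part of the SPDK scripts:

```shell
# Aggregate "element at address: ... with size: ... MiB" records from a
# dpdk_mem_utility-style dump: count them and total the reported sizes.
# summarize_elements is an illustrative helper, not part of the SPDK scripts.
summarize_elements() {
    awk '
        /element at address:/ {
            for (i = 1; i <= NF; i++)
                if ($i == "size:") {   # the field after "size:" is the MiB value
                    total += $(i + 1)
                    count++
                }
        }
        END { printf "%d elements, %.6f MiB\n", count, total }
    '
}

# Sample records mimicking the dump format above
summarize_elements <<'EOF'
element at address: 0x2000004fdf40 with size: 0.000244 MiB
element at address: 0x2000004fe040 with size: 0.000244 MiB
element at address: 0x20000a7fef80 with size: 132.000183 MiB
EOF
```

Against a full autotest log you would typically extract the dpdk_mem_utility section first; the numbers above are only the sample's, not this run's totals.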
00:06:52.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:52.580 04:23:48 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.580 04:23:48 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.580 04:23:48 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.580 04:23:48 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.580 04:23:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.580 04:23:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.580 04:23:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.580 04:23:48 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.580 04:23:48 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.580 04:23:48 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.580 04:23:48 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.580 04:23:48 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.580 04:23:48 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.580 04:23:48 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.580 04:23:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.580 04:23:48 event -- scripts/common.sh@344 -- # case "$op" in 00:06:52.580 04:23:48 event -- scripts/common.sh@345 -- # : 1 00:06:52.580 04:23:48 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.580 04:23:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.580 04:23:48 event -- scripts/common.sh@365 -- # decimal 1 00:06:52.580 04:23:48 event -- scripts/common.sh@353 -- # local d=1 00:06:52.581 04:23:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.581 04:23:48 event -- scripts/common.sh@355 -- # echo 1 00:06:52.581 04:23:48 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.581 04:23:48 event -- scripts/common.sh@366 -- # decimal 2 00:06:52.581 04:23:48 event -- scripts/common.sh@353 -- # local d=2 00:06:52.581 04:23:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.581 04:23:48 event -- scripts/common.sh@355 -- # echo 2 00:06:52.581 04:23:48 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.581 04:23:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.581 04:23:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.581 04:23:48 event -- scripts/common.sh@368 -- # return 0 00:06:52.581 04:23:48 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.581 04:23:48 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.581 --rc genhtml_branch_coverage=1 00:06:52.581 --rc genhtml_function_coverage=1 00:06:52.581 --rc genhtml_legend=1 00:06:52.581 --rc geninfo_all_blocks=1 00:06:52.581 --rc geninfo_unexecuted_blocks=1 00:06:52.581 00:06:52.581 ' 00:06:52.581 04:23:48 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.581 --rc genhtml_branch_coverage=1 00:06:52.581 --rc genhtml_function_coverage=1 00:06:52.581 --rc genhtml_legend=1 00:06:52.581 --rc geninfo_all_blocks=1 00:06:52.581 --rc geninfo_unexecuted_blocks=1 00:06:52.581 00:06:52.581 ' 00:06:52.581 04:23:48 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:52.581 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:52.581 --rc genhtml_branch_coverage=1 00:06:52.581 --rc genhtml_function_coverage=1 00:06:52.581 --rc genhtml_legend=1 00:06:52.581 --rc geninfo_all_blocks=1 00:06:52.581 --rc geninfo_unexecuted_blocks=1 00:06:52.581 00:06:52.581 ' 00:06:52.581 04:23:48 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.581 --rc genhtml_branch_coverage=1 00:06:52.581 --rc genhtml_function_coverage=1 00:06:52.581 --rc genhtml_legend=1 00:06:52.581 --rc geninfo_all_blocks=1 00:06:52.581 --rc geninfo_unexecuted_blocks=1 00:06:52.581 00:06:52.581 ' 00:06:52.581 04:23:48 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:52.581 04:23:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:52.581 04:23:48 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:52.581 04:23:48 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:52.581 04:23:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.581 04:23:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.581 ************************************ 00:06:52.581 START TEST event_perf 00:06:52.581 ************************************ 00:06:52.581 04:23:48 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:52.581 Running I/O for 1 seconds...[2024-11-27 04:23:48.867864] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:52.581 [2024-11-27 04:23:48.868012] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58211 ] 00:06:52.581 [2024-11-27 04:23:49.045313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.839 [2024-11-27 04:23:49.194356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.839 [2024-11-27 04:23:49.194544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.839 [2024-11-27 04:23:49.194735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.839 [2024-11-27 04:23:49.194707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.217 Running I/O for 1 seconds... 00:06:54.217 lcore 0: 96655 00:06:54.217 lcore 1: 96658 00:06:54.217 lcore 2: 96661 00:06:54.217 lcore 3: 96659 00:06:54.217 done. 
00:06:54.217 00:06:54.217 real 0m1.639s 00:06:54.217 user 0m4.372s 00:06:54.217 sys 0m0.141s 00:06:54.217 04:23:50 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.217 04:23:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.217 ************************************ 00:06:54.217 END TEST event_perf 00:06:54.217 ************************************ 00:06:54.217 04:23:50 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:54.217 04:23:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:54.217 04:23:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.217 04:23:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.217 ************************************ 00:06:54.217 START TEST event_reactor 00:06:54.217 ************************************ 00:06:54.217 04:23:50 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:54.217 [2024-11-27 04:23:50.576806] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:54.217 [2024-11-27 04:23:50.576984] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58256 ] 00:06:54.217 [2024-11-27 04:23:50.750867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.477 [2024-11-27 04:23:50.894618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.854 test_start 00:06:55.854 oneshot 00:06:55.854 tick 100 00:06:55.854 tick 100 00:06:55.854 tick 250 00:06:55.854 tick 100 00:06:55.854 tick 100 00:06:55.854 tick 250 00:06:55.854 tick 100 00:06:55.854 tick 500 00:06:55.854 tick 100 00:06:55.854 tick 100 00:06:55.854 tick 250 00:06:55.854 tick 100 00:06:55.854 tick 100 00:06:55.854 test_end 00:06:55.854 00:06:55.854 real 0m1.611s 00:06:55.854 user 0m1.396s 00:06:55.854 sys 0m0.106s 00:06:55.854 04:23:52 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.854 ************************************ 00:06:55.854 END TEST event_reactor 00:06:55.854 ************************************ 00:06:55.854 04:23:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:55.854 04:23:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:55.854 04:23:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:55.854 04:23:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.854 04:23:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.854 ************************************ 00:06:55.854 START TEST event_reactor_perf 00:06:55.854 ************************************ 00:06:55.854 04:23:52 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:55.854 [2024-11-27 
04:23:52.255680] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:55.854 [2024-11-27 04:23:52.255779] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58292 ] 00:06:55.854 [2024-11-27 04:23:52.430505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.113 [2024-11-27 04:23:52.576424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.492 test_start 00:06:57.492 test_end 00:06:57.492 Performance: 369379 events per second 00:06:57.492 00:06:57.492 real 0m1.628s 00:06:57.492 user 0m1.405s 00:06:57.492 sys 0m0.115s 00:06:57.492 ************************************ 00:06:57.492 END TEST event_reactor_perf 00:06:57.492 ************************************ 00:06:57.492 04:23:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.492 04:23:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.492 04:23:53 event -- event/event.sh@49 -- # uname -s 00:06:57.492 04:23:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:57.492 04:23:53 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:57.492 04:23:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.492 04:23:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.492 04:23:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.492 ************************************ 00:06:57.492 START TEST event_scheduler 00:06:57.492 ************************************ 00:06:57.492 04:23:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:57.492 * Looking for test storage... 
00:06:57.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:57.492 04:23:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.492 04:23:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.492 04:23:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.752 04:23:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:57.752 04:23:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.753 04:23:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.753 04:23:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.753 04:23:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.753 --rc genhtml_branch_coverage=1 00:06:57.753 --rc genhtml_function_coverage=1 00:06:57.753 --rc genhtml_legend=1 00:06:57.753 --rc geninfo_all_blocks=1 00:06:57.753 --rc geninfo_unexecuted_blocks=1 00:06:57.753 00:06:57.753 ' 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.753 --rc genhtml_branch_coverage=1 00:06:57.753 --rc genhtml_function_coverage=1 00:06:57.753 --rc 
genhtml_legend=1 00:06:57.753 --rc geninfo_all_blocks=1 00:06:57.753 --rc geninfo_unexecuted_blocks=1 00:06:57.753 00:06:57.753 ' 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.753 --rc genhtml_branch_coverage=1 00:06:57.753 --rc genhtml_function_coverage=1 00:06:57.753 --rc genhtml_legend=1 00:06:57.753 --rc geninfo_all_blocks=1 00:06:57.753 --rc geninfo_unexecuted_blocks=1 00:06:57.753 00:06:57.753 ' 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.753 --rc genhtml_branch_coverage=1 00:06:57.753 --rc genhtml_function_coverage=1 00:06:57.753 --rc genhtml_legend=1 00:06:57.753 --rc geninfo_all_blocks=1 00:06:57.753 --rc geninfo_unexecuted_blocks=1 00:06:57.753 00:06:57.753 ' 00:06:57.753 04:23:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:57.753 04:23:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58364 00:06:57.753 04:23:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:57.753 04:23:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.753 04:23:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58364 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58364 ']' 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:57.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.753 04:23:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.753 [2024-11-27 04:23:54.211822] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:57.753 [2024-11-27 04:23:54.211999] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58364 ] 00:06:58.012 [2024-11-27 04:23:54.367735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.012 [2024-11-27 04:23:54.490046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.012 [2024-11-27 04:23:54.490348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.012 [2024-11-27 04:23:54.490316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.012 [2024-11-27 04:23:54.490223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.580 04:23:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.580 04:23:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:58.580 04:23:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:58.580 04:23:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.580 04:23:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:58.580 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:58.580 POWER: Cannot set governor of lcore 0 to userspace 00:06:58.580 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:58.580 POWER: Cannot set governor of lcore 0 to performance 00:06:58.580 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:58.580 POWER: Cannot set governor of lcore 0 to userspace 00:06:58.580 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:58.580 POWER: Cannot set governor of lcore 0 to userspace 00:06:58.580 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:58.580 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:58.580 POWER: Unable to set Power Management Environment for lcore 0 00:06:58.580 [2024-11-27 04:23:55.059589] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:58.580 [2024-11-27 04:23:55.059614] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:58.580 [2024-11-27 04:23:55.059626] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:58.580 [2024-11-27 04:23:55.059648] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:58.580 [2024-11-27 04:23:55.059657] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:58.580 [2024-11-27 04:23:55.059668] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:58.580 04:23:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.580 04:23:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:58.580 04:23:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.580 04:23:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:58.840 [2024-11-27 04:23:55.392560] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:58.840 04:23:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.840 04:23:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:58.840 04:23:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.840 04:23:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.840 04:23:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:58.840 ************************************ 00:06:58.840 START TEST scheduler_create_thread 00:06:58.840 ************************************ 00:06:58.840 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:58.840 04:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:58.840 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.840 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.100 2 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.100 3 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.100 4 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.100 5 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.100 6 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.100 7 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.100 8 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.100 9 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.100 10 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.100 04:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.482 04:23:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.482 04:23:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:00.482 04:23:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:00.482 04:23:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.482 04:23:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.422 04:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.422 04:23:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:01.422 04:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.422 04:23:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.989 04:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.989 04:23:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:01.989 04:23:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:01.989 04:23:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.989 04:23:58 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.925 04:23:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.925 ************************************ 00:07:02.925 END TEST scheduler_create_thread 00:07:02.925 ************************************ 00:07:02.925 00:07:02.925 real 0m3.889s 00:07:02.925 user 0m0.026s 00:07:02.925 sys 0m0.012s 00:07:02.925 04:23:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.925 04:23:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.925 04:23:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:02.925 04:23:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58364 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58364 ']' 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58364 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58364 00:07:02.925 killing process with pid 58364 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58364' 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58364 00:07:02.925 04:23:59 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58364 00:07:03.184 [2024-11-27 04:23:59.674387] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:04.566 00:07:04.566 real 0m6.923s 00:07:04.566 user 0m14.371s 00:07:04.566 sys 0m0.496s 00:07:04.566 ************************************ 00:07:04.566 END TEST event_scheduler 00:07:04.566 ************************************ 00:07:04.566 04:24:00 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.566 04:24:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.566 04:24:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:04.566 04:24:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:04.566 04:24:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.566 04:24:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.566 04:24:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.566 ************************************ 00:07:04.566 START TEST app_repeat 00:07:04.566 ************************************ 00:07:04.566 04:24:00 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58491 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:04.566 
04:24:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58491' 00:07:04.566 Process app_repeat pid: 58491 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:04.566 spdk_app_start Round 0 00:07:04.566 04:24:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58491 /var/tmp/spdk-nbd.sock 00:07:04.566 04:24:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58491 ']' 00:07:04.566 04:24:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.566 04:24:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.566 04:24:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.566 04:24:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.566 04:24:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.566 [2024-11-27 04:24:00.975001] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:04.566 [2024-11-27 04:24:00.975128] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58491 ] 00:07:04.566 [2024-11-27 04:24:01.129448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.826 [2024-11-27 04:24:01.272681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.826 [2024-11-27 04:24:01.272725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.392 04:24:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.392 04:24:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:05.392 04:24:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.651 Malloc0 00:07:05.651 04:24:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.910 Malloc1 00:07:05.910 04:24:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.910 04:24:02 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.910 04:24:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:06.168 /dev/nbd0 00:07:06.168 04:24:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.168 04:24:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.168 1+0 records in 00:07:06.168 1+0 
records out 00:07:06.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290341 s, 14.1 MB/s 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.168 04:24:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:06.168 04:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.168 04:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.168 04:24:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:06.427 /dev/nbd1 00:07:06.427 04:24:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.427 04:24:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.427 1+0 records in 00:07:06.427 1+0 records out 00:07:06.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354993 s, 11.5 MB/s 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.427 04:24:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:06.427 04:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.427 04:24:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.427 04:24:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.427 04:24:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.427 04:24:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.687 { 00:07:06.687 "nbd_device": "/dev/nbd0", 00:07:06.687 "bdev_name": "Malloc0" 00:07:06.687 }, 00:07:06.687 { 00:07:06.687 "nbd_device": "/dev/nbd1", 00:07:06.687 "bdev_name": "Malloc1" 00:07:06.687 } 00:07:06.687 ]' 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.687 { 00:07:06.687 "nbd_device": "/dev/nbd0", 00:07:06.687 "bdev_name": "Malloc0" 00:07:06.687 }, 00:07:06.687 { 00:07:06.687 "nbd_device": "/dev/nbd1", 00:07:06.687 "bdev_name": "Malloc1" 00:07:06.687 } 00:07:06.687 ]' 
00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.687 /dev/nbd1' 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.687 /dev/nbd1' 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:06.687 256+0 records in 00:07:06.687 256+0 records out 00:07:06.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128169 s, 81.8 MB/s 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.687 256+0 records in 00:07:06.687 256+0 records out 00:07:06.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233602 s, 44.9 MB/s 00:07:06.687 04:24:03 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.687 256+0 records in 00:07:06.687 256+0 records out 00:07:06.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245328 s, 42.7 MB/s 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:06.687 04:24:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.947 04:24:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.207 04:24:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.207 04:24:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.208 04:24:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.208 04:24:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.208 04:24:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.208 04:24:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.208 04:24:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:07.208 04:24:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.208 04:24:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.208 04:24:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.208 04:24:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.513 04:24:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.513 04:24:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.799 04:24:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:09.181 [2024-11-27 04:24:05.584217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.181 [2024-11-27 04:24:05.696788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.181 [2024-11-27 04:24:05.696791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.440 
[2024-11-27 04:24:05.889876] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:09.440 [2024-11-27 04:24:05.890101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.821 04:24:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.821 04:24:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:10.821 spdk_app_start Round 1 00:07:10.821 04:24:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58491 /var/tmp/spdk-nbd.sock 00:07:10.821 04:24:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58491 ']' 00:07:10.821 04:24:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.821 04:24:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.821 04:24:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:10.821 04:24:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.821 04:24:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.080 04:24:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.080 04:24:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:11.080 04:24:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.340 Malloc0 00:07:11.340 04:24:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.600 Malloc1 00:07:11.600 04:24:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.600 04:24:08 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.600 04:24:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.860 /dev/nbd0 00:07:11.860 04:24:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.860 04:24:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.860 1+0 records in 00:07:11.860 1+0 records out 00:07:11.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244853 s, 16.7 MB/s 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.860 
04:24:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.860 04:24:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:11.860 04:24:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.860 04:24:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.860 04:24:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.121 /dev/nbd1 00:07:12.121 04:24:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.121 04:24:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.121 1+0 records in 00:07:12.121 1+0 records out 00:07:12.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403516 s, 10.2 MB/s 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:12.121 04:24:08 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.121 04:24:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:12.121 04:24:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.121 04:24:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.121 04:24:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.121 04:24:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.121 04:24:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.380 04:24:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.381 { 00:07:12.381 "nbd_device": "/dev/nbd0", 00:07:12.381 "bdev_name": "Malloc0" 00:07:12.381 }, 00:07:12.381 { 00:07:12.381 "nbd_device": "/dev/nbd1", 00:07:12.381 "bdev_name": "Malloc1" 00:07:12.381 } 00:07:12.381 ]' 00:07:12.381 04:24:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.381 { 00:07:12.381 "nbd_device": "/dev/nbd0", 00:07:12.381 "bdev_name": "Malloc0" 00:07:12.381 }, 00:07:12.381 { 00:07:12.381 "nbd_device": "/dev/nbd1", 00:07:12.381 "bdev_name": "Malloc1" 00:07:12.381 } 00:07:12.381 ]' 00:07:12.381 04:24:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.641 /dev/nbd1' 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.641 /dev/nbd1' 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.641 
04:24:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.641 04:24:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.641 256+0 records in 00:07:12.641 256+0 records out 00:07:12.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666843 s, 157 MB/s 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.641 256+0 records in 00:07:12.641 256+0 records out 00:07:12.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261956 s, 40.0 MB/s 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.641 256+0 records in 00:07:12.641 256+0 records out 00:07:12.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275207 s, 38.1 MB/s 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.641 04:24:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.901 04:24:09 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.901 04:24:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.901 04:24:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.901 04:24:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.901 04:24:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.901 04:24:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.901 04:24:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.901 04:24:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.901 04:24:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.901 04:24:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.160 04:24:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.420 04:24:09 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.420 04:24:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.420 04:24:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:14.017 04:24:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:15.399 [2024-11-27 04:24:11.628045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.399 [2024-11-27 04:24:11.741578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.399 [2024-11-27 04:24:11.741604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.399 [2024-11-27 04:24:11.937089] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:15.399 [2024-11-27 04:24:11.937180] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.781 spdk_app_start Round 2 00:07:16.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:16.781 04:24:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:16.781 04:24:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:16.781 04:24:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58491 /var/tmp/spdk-nbd.sock 00:07:16.781 04:24:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58491 ']' 00:07:16.781 04:24:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.781 04:24:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.781 04:24:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:16.781 04:24:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.781 04:24:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.041 04:24:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.041 04:24:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:17.041 04:24:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.300 Malloc0 00:07:17.300 04:24:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.559 Malloc1 00:07:17.559 04:24:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.559 04:24:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.818 /dev/nbd0 00:07:17.818 04:24:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.818 04:24:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.818 1+0 records in 00:07:17.818 1+0 records out 00:07:17.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463766 s, 8.8 MB/s 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.818 04:24:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:17.818 04:24:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.818 04:24:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.818 04:24:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:18.077 /dev/nbd1 00:07:18.077 04:24:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.077 04:24:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:18.077 04:24:14 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:18.077 1+0 records in 00:07:18.077 1+0 records out 00:07:18.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395473 s, 10.4 MB/s 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.077 04:24:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:18.077 04:24:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.077 04:24:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.077 04:24:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.077 04:24:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.077 04:24:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.337 { 00:07:18.337 "nbd_device": "/dev/nbd0", 00:07:18.337 "bdev_name": "Malloc0" 00:07:18.337 }, 00:07:18.337 { 00:07:18.337 "nbd_device": "/dev/nbd1", 00:07:18.337 "bdev_name": "Malloc1" 00:07:18.337 } 00:07:18.337 ]' 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.337 { 
00:07:18.337 "nbd_device": "/dev/nbd0", 00:07:18.337 "bdev_name": "Malloc0" 00:07:18.337 }, 00:07:18.337 { 00:07:18.337 "nbd_device": "/dev/nbd1", 00:07:18.337 "bdev_name": "Malloc1" 00:07:18.337 } 00:07:18.337 ]' 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.337 /dev/nbd1' 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.337 /dev/nbd1' 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.337 256+0 records in 00:07:18.337 256+0 records out 00:07:18.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134399 s, 78.0 MB/s 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.337 04:24:14 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.337 256+0 records in 00:07:18.337 256+0 records out 00:07:18.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260156 s, 40.3 MB/s 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.337 256+0 records in 00:07:18.337 256+0 records out 00:07:18.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032087 s, 32.7 MB/s 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.337 04:24:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.597 04:24:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.597 04:24:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.856 04:24:15 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.856 04:24:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.115 04:24:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.115 04:24:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.683 04:24:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.088 
[2024-11-27 04:24:17.509848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.347 [2024-11-27 04:24:17.675208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.347 [2024-11-27 04:24:17.675212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.607 [2024-11-27 04:24:17.936393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.607 [2024-11-27 04:24:17.936535] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.547 04:24:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58491 /var/tmp/spdk-nbd.sock 00:07:22.548 04:24:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58491 ']' 00:07:22.548 04:24:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.548 04:24:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.548 04:24:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:22.548 04:24:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.548 04:24:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:22.807 04:24:19 event.app_repeat -- event/event.sh@39 -- # killprocess 58491 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58491 ']' 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58491 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58491 00:07:22.807 killing process with pid 58491 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58491' 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58491 00:07:22.807 04:24:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58491 00:07:24.186 spdk_app_start is called in Round 0. 00:07:24.186 Shutdown signal received, stop current app iteration 00:07:24.186 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:07:24.186 spdk_app_start is called in Round 1. 00:07:24.186 Shutdown signal received, stop current app iteration 00:07:24.186 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:07:24.186 spdk_app_start is called in Round 2. 
00:07:24.186 Shutdown signal received, stop current app iteration 00:07:24.186 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:07:24.186 spdk_app_start is called in Round 3. 00:07:24.186 Shutdown signal received, stop current app iteration 00:07:24.186 04:24:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:24.186 04:24:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:24.186 00:07:24.186 real 0m19.496s 00:07:24.186 user 0m41.388s 00:07:24.186 sys 0m2.912s 00:07:24.186 ************************************ 00:07:24.186 END TEST app_repeat 00:07:24.186 ************************************ 00:07:24.186 04:24:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.186 04:24:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.186 04:24:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:24.186 04:24:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:24.186 04:24:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.186 04:24:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.186 04:24:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.186 ************************************ 00:07:24.186 START TEST cpu_locks 00:07:24.186 ************************************ 00:07:24.186 04:24:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:24.186 * Looking for test storage... 
00:07:24.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:24.186 04:24:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:24.186 04:24:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:24.186 04:24:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:24.186 04:24:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.186 04:24:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.187 04:24:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:24.187 04:24:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.187 04:24:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:24.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.187 --rc genhtml_branch_coverage=1 00:07:24.187 --rc genhtml_function_coverage=1 00:07:24.187 --rc genhtml_legend=1 00:07:24.187 --rc geninfo_all_blocks=1 00:07:24.187 --rc geninfo_unexecuted_blocks=1 00:07:24.187 00:07:24.187 ' 00:07:24.187 04:24:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:24.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.187 --rc genhtml_branch_coverage=1 00:07:24.187 --rc genhtml_function_coverage=1 00:07:24.187 --rc genhtml_legend=1 00:07:24.187 --rc geninfo_all_blocks=1 00:07:24.187 --rc geninfo_unexecuted_blocks=1 
00:07:24.187 00:07:24.187 ' 00:07:24.187 04:24:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:24.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.187 --rc genhtml_branch_coverage=1 00:07:24.187 --rc genhtml_function_coverage=1 00:07:24.187 --rc genhtml_legend=1 00:07:24.187 --rc geninfo_all_blocks=1 00:07:24.187 --rc geninfo_unexecuted_blocks=1 00:07:24.187 00:07:24.187 ' 00:07:24.187 04:24:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:24.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.187 --rc genhtml_branch_coverage=1 00:07:24.187 --rc genhtml_function_coverage=1 00:07:24.187 --rc genhtml_legend=1 00:07:24.187 --rc geninfo_all_blocks=1 00:07:24.187 --rc geninfo_unexecuted_blocks=1 00:07:24.187 00:07:24.187 ' 00:07:24.187 04:24:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:24.187 04:24:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:24.187 04:24:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:24.187 04:24:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:24.187 04:24:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.187 04:24:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.187 04:24:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.187 ************************************ 00:07:24.187 START TEST default_locks 00:07:24.187 ************************************ 00:07:24.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:24.187 04:24:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:24.187 04:24:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58938 00:07:24.187 04:24:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.187 04:24:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58938 00:07:24.187 04:24:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58938 ']' 00:07:24.187 04:24:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.187 04:24:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.187 04:24:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.187 04:24:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.187 04:24:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.187 [2024-11-27 04:24:20.763071] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:24.187 [2024-11-27 04:24:20.763218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58938 ] 00:07:24.446 [2024-11-27 04:24:20.939042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.705 [2024-11-27 04:24:21.053740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.644 04:24:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.644 04:24:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:25.644 04:24:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58938 00:07:25.644 04:24:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58938 00:07:25.644 04:24:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58938 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58938 ']' 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58938 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58938 00:07:25.904 killing process with pid 58938 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58938' 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58938 00:07:25.904 04:24:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58938 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58938 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58938 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:28.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.443 ERROR: process (pid: 58938) is no longer running 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58938 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58938 ']' 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58938) - No such process 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:28.443 00:07:28.443 real 0m4.342s 00:07:28.443 user 0m4.281s 00:07:28.443 sys 0m0.640s 00:07:28.443 ************************************ 00:07:28.443 END TEST default_locks 00:07:28.443 ************************************ 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.443 04:24:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.703 04:24:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:28.703 04:24:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.703 04:24:25 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.703 04:24:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.703 ************************************ 00:07:28.703 START TEST default_locks_via_rpc 00:07:28.703 ************************************ 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59015 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59015 00:07:28.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59015 ']' 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.703 04:24:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.703 [2024-11-27 04:24:25.178802] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:28.703 [2024-11-27 04:24:25.178936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59015 ] 00:07:28.963 [2024-11-27 04:24:25.352517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.963 [2024-11-27 04:24:25.470486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.903 04:24:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59015 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59015 00:07:29.903 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59015 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59015 ']' 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59015 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59015 00:07:30.163 killing process with pid 59015 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59015' 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59015 00:07:30.163 04:24:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59015 00:07:32.727 00:07:32.727 real 0m3.975s 00:07:32.727 user 0m4.000s 00:07:32.727 sys 0m0.616s 00:07:32.727 04:24:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.727 04:24:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.727 ************************************ 00:07:32.727 END TEST default_locks_via_rpc 00:07:32.727 ************************************ 00:07:32.727 04:24:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:32.727 04:24:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.727 04:24:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.727 04:24:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.727 ************************************ 00:07:32.727 START TEST non_locking_app_on_locked_coremask 00:07:32.727 ************************************ 00:07:32.727 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:32.727 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59089 00:07:32.727 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.727 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59089 /var/tmp/spdk.sock 00:07:32.727 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59089 ']' 00:07:32.728 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.728 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:32.728 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.728 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.728 04:24:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.728 [2024-11-27 04:24:29.214319] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:32.728 [2024-11-27 04:24:29.214446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59089 ] 00:07:32.987 [2024-11-27 04:24:29.389598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.987 [2024-11-27 04:24:29.500006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59105 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59105 /var/tmp/spdk2.sock 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59105 ']' 00:07:33.976 04:24:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.976 04:24:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.976 [2024-11-27 04:24:30.468250] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:33.976 [2024-11-27 04:24:30.468477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59105 ] 00:07:34.236 [2024-11-27 04:24:30.641957] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:34.236 [2024-11-27 04:24:30.642031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.494 [2024-11-27 04:24:30.873988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.034 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.034 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:37.034 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59089 00:07:37.034 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59089 00:07:37.034 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59089 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59089 ']' 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59089 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59089 00:07:37.035 killing process with pid 59089 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59089' 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59089 00:07:37.035 04:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59089 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59105 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59105 ']' 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59105 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59105 00:07:43.608 killing process with pid 59105 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59105' 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59105 00:07:43.608 04:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59105 00:07:45.517 00:07:45.517 real 0m12.667s 00:07:45.517 user 0m12.747s 00:07:45.517 sys 0m1.306s 00:07:45.517 04:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:45.517 04:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.517 ************************************ 00:07:45.517 END TEST non_locking_app_on_locked_coremask 00:07:45.518 ************************************ 00:07:45.518 04:24:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:45.518 04:24:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.518 04:24:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.518 04:24:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.518 ************************************ 00:07:45.518 START TEST locking_app_on_unlocked_coremask 00:07:45.518 ************************************ 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59273 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59273 /var/tmp/spdk.sock 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59273 ']' 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.518 04:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.518 [2024-11-27 04:24:41.940040] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:45.518 [2024-11-27 04:24:41.940307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59273 ] 00:07:45.778 [2024-11-27 04:24:42.116361] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:45.778 [2024-11-27 04:24:42.116525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.778 [2024-11-27 04:24:42.257828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.716 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.716 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:46.716 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59289 00:07:46.716 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59289 /var/tmp/spdk2.sock 00:07:46.716 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:46.716 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59289 ']' 00:07:46.716 04:24:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.977 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.977 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.977 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.977 04:24:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.977 [2024-11-27 04:24:43.402010] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:46.977 [2024-11-27 04:24:43.402231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59289 ] 00:07:47.238 [2024-11-27 04:24:43.568533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.238 [2024-11-27 04:24:43.798838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.782 04:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.782 04:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:49.782 04:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59289 00:07:49.783 04:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59289 00:07:49.783 04:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59273 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59273 ']' 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59273 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59273 00:07:50.042 killing process with pid 59273 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59273' 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59273 00:07:50.042 04:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59273 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59289 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59289 ']' 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59289 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:55.331 
04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59289 00:07:55.331 killing process with pid 59289 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59289' 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59289 00:07:55.331 04:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59289 00:07:57.239 ************************************ 00:07:57.239 END TEST locking_app_on_unlocked_coremask 00:07:57.239 ************************************ 00:07:57.239 00:07:57.239 real 0m11.806s 00:07:57.239 user 0m11.913s 00:07:57.239 sys 0m1.406s 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.239 04:24:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:57.239 04:24:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.239 04:24:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.239 04:24:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.239 ************************************ 00:07:57.239 START TEST locking_app_on_locked_coremask 00:07:57.239 
************************************ 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59440 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59440 /var/tmp/spdk.sock 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59440 ']' 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.239 04:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.239 [2024-11-27 04:24:53.807058] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:57.239 [2024-11-27 04:24:53.807291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59440 ] 00:07:57.499 [2024-11-27 04:24:53.968621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.759 [2024-11-27 04:24:54.084643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59456 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59456 /var/tmp/spdk2.sock 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59456 /var/tmp/spdk2.sock 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59456 /var/tmp/spdk2.sock 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59456 ']' 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:58.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.720 04:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.720 [2024-11-27 04:24:55.065993] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:58.720 [2024-11-27 04:24:55.066537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59456 ] 00:07:58.720 [2024-11-27 04:24:55.236113] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59440 has claimed it. 00:07:58.720 [2024-11-27 04:24:55.236208] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:59.291 ERROR: process (pid: 59456) is no longer running
00:07:59.291 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59456) - No such process
00:07:59.291 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:59.291 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:59.291 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:59.291 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:59.291 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:59.291 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:59.291 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59440
00:07:59.291 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59440
00:07:59.291 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:59.551 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59440
00:07:59.551 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59440 ']'
00:07:59.551 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59440
00:07:59.551 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:59.551 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:59.551 04:24:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59440
00:07:59.551 04:24:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:59.551 04:24:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:59.551 04:24:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59440'
00:07:59.551 killing process with pid 59440
00:07:59.551 04:24:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59440
00:07:59.551 04:24:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59440
00:08:02.087 ************************************
00:08:02.087 END TEST locking_app_on_locked_coremask
00:08:02.087 ************************************
00:08:02.087
00:08:02.087 real 0m4.853s
00:08:02.087 user 0m4.997s
00:08:02.088 sys 0m0.758s
00:08:02.088 04:24:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:02.088 04:24:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:02.088 04:24:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:08:02.088 04:24:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:02.088 04:24:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:02.088 04:24:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:02.088 ************************************
00:08:02.088 START TEST locking_overlapped_coremask
00:08:02.088 ************************************
00:08:02.088 04:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:08:02.088 04:24:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59526
00:08:02.088 04:24:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:08:02.088 04:24:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59526 /var/tmp/spdk.sock
00:08:02.088 04:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59526 ']'
00:08:02.088 04:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:02.088 04:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:02.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 04:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:02.088 04:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:02.088 04:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:02.347 [2024-11-27 04:24:58.730870] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:08:02.347 [2024-11-27 04:24:58.731072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59526 ]
00:08:02.347 [2024-11-27 04:24:58.909894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:02.606 [2024-11-27 04:24:59.058782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:02.606 [2024-11-27 04:24:59.058964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.606 [2024-11-27 04:24:59.058967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59549
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59549 /var/tmp/spdk2.sock
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59549 /var/tmp/spdk2.sock
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59549 /var/tmp/spdk2.sock
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59549 ']'
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:03.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:03.986 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:03.986 [2024-11-27 04:25:00.230858] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:08:03.986 [2024-11-27 04:25:00.231113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59549 ]
00:08:03.986 [2024-11-27 04:25:00.411063] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59526 has claimed it.
00:08:03.986 [2024-11-27 04:25:00.411185] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
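The failed core claim logged above is enforced through per-core advisory lock files (the `/var/tmp/spdk_cpu_lock_*` names appear in the `check_remaining_locks` entries later in this log). A minimal Python sketch of that flock-style scheme — the temporary directory, the file-name format, and the single-process two-claim structure here are illustrative assumptions, not SPDK's actual implementation:

```python
import fcntl
import os
import tempfile

def claim_core(core: int, lock_dir: str):
    """Try to take an exclusive, non-blocking advisory lock for one core.

    Returns the open fd on success (the caller keeps it open to hold the
    lock), or None if another holder already owns the core.
    """
    path = os.path.join(lock_dir, "spdk_cpu_lock_%03d" % core)  # hypothetical naming
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        os.close(fd)  # lock held elsewhere; mirrors "Cannot create lock on core 2"
        return None

lock_dir = tempfile.mkdtemp()
first = claim_core(2, lock_dir)   # plays the role of pid 59526 (mask 0x7 covers core 2)
second = claim_core(2, lock_dir)  # plays the role of pid 59549 (mask 0x1c also covers core 2)
print(first is not None, second is None)  # → True True
```

flock locks attach to the open file description rather than the process, so the second `open` + `flock` conflicts with the first even inside a single process, which is enough to model the overlapping-coremask failure shown above.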
00:08:04.555 ERROR: process (pid: 59549) is no longer running
00:08:04.555 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59549) - No such process
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59526
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59526 ']'
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59526
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59526
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59526'
00:08:04.555 killing process with pid 59526
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59526
00:08:04.555 04:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59526
00:08:07.847
00:08:07.847 real 0m5.212s
00:08:07.847 user 0m13.966s
00:08:07.847 sys 0m0.794s
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:07.847 ************************************
00:08:07.847 END TEST locking_overlapped_coremask
00:08:07.847 ************************************
00:08:07.847 04:25:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:08:07.847 04:25:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:07.847 04:25:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:07.847 04:25:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:07.847 ************************************
00:08:07.847 START TEST locking_overlapped_coremask_via_rpc
00:08:07.847 ************************************
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59619
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59619 /var/tmp/spdk.sock
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59619 ']'
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:07.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:07.847 04:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:07.847 [2024-11-27 04:25:04.026444] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:08:07.847 [2024-11-27 04:25:04.026581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59619 ]
00:08:07.847 [2024-11-27 04:25:04.196592] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:07.847 [2024-11-27 04:25:04.196657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:07.847 [2024-11-27 04:25:04.354736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:07.847 [2024-11-27 04:25:04.354884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:07.847 [2024-11-27 04:25:04.354926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:09.246 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59648
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59648 /var/tmp/spdk2.sock
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59648 ']'
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:09.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:09.247 04:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:09.247 [2024-11-27 04:25:05.581572] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:08:09.247 [2024-11-27 04:25:05.581787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59648 ]
00:08:09.247 [2024-11-27 04:25:05.758845] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:09.247 [2024-11-27 04:25:05.758900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:09.507 [2024-11-27 04:25:06.004585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:09.507 [2024-11-27 04:25:06.008198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:09.507 [2024-11-27 04:25:06.008243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:12.048 [2024-11-27 04:25:08.176369] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59619 has claimed it.
00:08:12.048 request:
00:08:12.048 {
00:08:12.048 "method": "framework_enable_cpumask_locks",
00:08:12.048 "req_id": 1
00:08:12.048 }
00:08:12.048 Got JSON-RPC error response
00:08:12.048 response:
00:08:12.048 {
00:08:12.048 "code": -32603,
00:08:12.048 "message": "Failed to claim CPU core: 2"
00:08:12.048 }
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59619 /var/tmp/spdk.sock
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59619 ']'
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:12.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59648 /var/tmp/spdk2.sock
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59648 ']'
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:12.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:12.048
00:08:12.048 real 0m4.708s
00:08:12.048 user 0m1.328s
00:08:12.048 sys 0m0.207s
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:12.048 04:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:12.048 ************************************
00:08:12.048 END TEST locking_overlapped_coremask_via_rpc
00:08:12.048 ************************************
00:08:12.308 04:25:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:08:12.308 04:25:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59619 ]]
00:08:12.308 04:25:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59619
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59619 ']'
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59619
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59619
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59619
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59619'
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59619
00:08:12.308 04:25:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59619
00:08:15.598 04:25:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59648 ]]
00:08:15.598 04:25:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59648
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59648 ']'
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59648
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59648
killing process with pid 59648
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59648'
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59648
00:08:15.598 04:25:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59648
00:08:18.131 04:25:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:08:18.131 04:25:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:08:18.131 04:25:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59619 ]]
00:08:18.131 04:25:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59619
00:08:18.131 04:25:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59619 ']'
00:08:18.131 04:25:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59619
00:08:18.131 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59619) - No such process
00:08:18.131 Process with pid 59619 is not found
00:08:18.131 04:25:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59619 is not found'
00:08:18.131 04:25:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59648 ]]
00:08:18.131 04:25:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59648
00:08:18.132 04:25:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59648 ']'
00:08:18.132 04:25:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59648
00:08:18.132 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59648) - No such process
00:08:18.132 04:25:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59648 is not found'
00:08:18.132 Process with pid 59648 is not found
00:08:18.132 04:25:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:08:18.132
00:08:18.132 real 0m53.666s
00:08:18.132 user 1m32.494s
00:08:18.132 sys 0m7.231s
00:08:18.132 ************************************
00:08:18.132 END TEST cpu_locks
00:08:18.132 ************************************
00:08:18.132 04:25:14 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:18.132 04:25:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:18.132 ************************************
00:08:18.132 END TEST event
00:08:18.132 ************************************
00:08:18.132
00:08:18.132 real 1m25.590s
00:08:18.132 user 2m35.687s
00:08:18.132 sys 0m11.383s
00:08:18.132 04:25:14 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:18.132 04:25:14 event -- common/autotest_common.sh@10 -- # set +x
00:08:18.132 04:25:14 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:18.132 04:25:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:18.132 04:25:14 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:18.132 04:25:14 -- common/autotest_common.sh@10 -- # set +x
00:08:18.132 ************************************
00:08:18.132 START TEST thread
00:08:18.132 ************************************
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:18.132 * Looking for test storage...
00:08:18.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:18.132 04:25:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:18.132 04:25:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:18.132 04:25:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:18.132 04:25:14 thread -- scripts/common.sh@336 -- # IFS=.-:
00:08:18.132 04:25:14 thread -- scripts/common.sh@336 -- # read -ra ver1
00:08:18.132 04:25:14 thread -- scripts/common.sh@337 -- # IFS=.-:
00:08:18.132 04:25:14 thread -- scripts/common.sh@337 -- # read -ra ver2
00:08:18.132 04:25:14 thread -- scripts/common.sh@338 -- # local 'op=<'
00:08:18.132 04:25:14 thread -- scripts/common.sh@340 -- # ver1_l=2
00:08:18.132 04:25:14 thread -- scripts/common.sh@341 -- # ver2_l=1
00:08:18.132 04:25:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:18.132 04:25:14 thread -- scripts/common.sh@344 -- # case "$op" in
00:08:18.132 04:25:14 thread -- scripts/common.sh@345 -- # : 1
00:08:18.132 04:25:14 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:18.132 04:25:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:18.132 04:25:14 thread -- scripts/common.sh@365 -- # decimal 1
00:08:18.132 04:25:14 thread -- scripts/common.sh@353 -- # local d=1
00:08:18.132 04:25:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:18.132 04:25:14 thread -- scripts/common.sh@355 -- # echo 1
00:08:18.132 04:25:14 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:08:18.132 04:25:14 thread -- scripts/common.sh@366 -- # decimal 2
00:08:18.132 04:25:14 thread -- scripts/common.sh@353 -- # local d=2
00:08:18.132 04:25:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:18.132 04:25:14 thread -- scripts/common.sh@355 -- # echo 2
00:08:18.132 04:25:14 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:08:18.132 04:25:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:18.132 04:25:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:18.132 04:25:14 thread -- scripts/common.sh@368 -- # return 0
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:18.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.132 --rc genhtml_branch_coverage=1
00:08:18.132 --rc genhtml_function_coverage=1
00:08:18.132 --rc genhtml_legend=1
00:08:18.132 --rc geninfo_all_blocks=1
00:08:18.132 --rc geninfo_unexecuted_blocks=1
00:08:18.132
00:08:18.132 '
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:18.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.132 --rc genhtml_branch_coverage=1
00:08:18.132 --rc genhtml_function_coverage=1
00:08:18.132 --rc genhtml_legend=1
00:08:18.132 --rc geninfo_all_blocks=1
00:08:18.132 --rc geninfo_unexecuted_blocks=1
00:08:18.132
00:08:18.132 '
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:18.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.132 --rc genhtml_branch_coverage=1
00:08:18.132 --rc genhtml_function_coverage=1
00:08:18.132 --rc genhtml_legend=1
00:08:18.132 --rc geninfo_all_blocks=1
00:08:18.132 --rc geninfo_unexecuted_blocks=1
00:08:18.132
00:08:18.132 '
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:18.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.132 --rc genhtml_branch_coverage=1
00:08:18.132 --rc genhtml_function_coverage=1
00:08:18.132 --rc genhtml_legend=1
00:08:18.132 --rc geninfo_all_blocks=1
00:08:18.132 --rc geninfo_unexecuted_blocks=1
00:08:18.132
00:08:18.132 '
00:08:18.132 04:25:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:18.132 04:25:14 thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.132 ************************************
00:08:18.132 START TEST thread_poller_perf
00:08:18.132 ************************************
00:08:18.132 04:25:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:18.132 [2024-11-27 04:25:14.526246] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:08:18.132 [2024-11-27 04:25:14.526408] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59843 ] 00:08:18.132 [2024-11-27 04:25:14.692308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.391 [2024-11-27 04:25:14.812339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.391 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:19.769 [2024-11-27T04:25:16.356Z] ====================================== 00:08:19.769 [2024-11-27T04:25:16.356Z] busy:2302273794 (cyc) 00:08:19.769 [2024-11-27T04:25:16.356Z] total_run_count: 383000 00:08:19.769 [2024-11-27T04:25:16.356Z] tsc_hz: 2290000000 (cyc) 00:08:19.769 [2024-11-27T04:25:16.356Z] ====================================== 00:08:19.769 [2024-11-27T04:25:16.356Z] poller_cost: 6011 (cyc), 2624 (nsec) 00:08:19.769 00:08:19.769 real 0m1.565s 00:08:19.769 user 0m1.365s 00:08:19.769 sys 0m0.092s 00:08:19.769 04:25:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.769 04:25:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:19.769 ************************************ 00:08:19.769 END TEST thread_poller_perf 00:08:19.769 ************************************ 00:08:19.769 04:25:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:19.769 04:25:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:19.769 04:25:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.769 04:25:16 thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.769 ************************************ 00:08:19.769 START TEST thread_poller_perf 00:08:19.769 
************************************ 00:08:19.769 04:25:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:19.769 [2024-11-27 04:25:16.155725] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:19.769 [2024-11-27 04:25:16.155830] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59885 ] 00:08:19.769 [2024-11-27 04:25:16.330432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.028 [2024-11-27 04:25:16.444316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.028 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:21.427 [2024-11-27T04:25:18.014Z] ====================================== 00:08:21.427 [2024-11-27T04:25:18.014Z] busy:2293796866 (cyc) 00:08:21.427 [2024-11-27T04:25:18.014Z] total_run_count: 5091000 00:08:21.427 [2024-11-27T04:25:18.014Z] tsc_hz: 2290000000 (cyc) 00:08:21.427 [2024-11-27T04:25:18.014Z] ====================================== 00:08:21.427 [2024-11-27T04:25:18.014Z] poller_cost: 450 (cyc), 196 (nsec) 00:08:21.427 ************************************ 00:08:21.427 END TEST thread_poller_perf 00:08:21.427 ************************************ 00:08:21.427 00:08:21.427 real 0m1.561s 00:08:21.427 user 0m1.355s 00:08:21.427 sys 0m0.098s 00:08:21.427 04:25:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.427 04:25:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:21.427 04:25:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:21.427 ************************************ 00:08:21.427 END TEST thread 00:08:21.427 ************************************ 00:08:21.427 
00:08:21.427 real 0m3.480s 00:08:21.427 user 0m2.882s 00:08:21.427 sys 0m0.397s 00:08:21.427 04:25:17 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.427 04:25:17 thread -- common/autotest_common.sh@10 -- # set +x 00:08:21.427 04:25:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:21.427 04:25:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:21.427 04:25:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.427 04:25:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.427 04:25:17 -- common/autotest_common.sh@10 -- # set +x 00:08:21.427 ************************************ 00:08:21.427 START TEST app_cmdline 00:08:21.427 ************************************ 00:08:21.427 04:25:17 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:21.427 * Looking for test storage... 00:08:21.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:21.427 04:25:17 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.427 04:25:17 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.427 04:25:17 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.427 04:25:17 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.427 04:25:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:21.427 04:25:17 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.427 04:25:17 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.427 --rc genhtml_branch_coverage=1 00:08:21.427 --rc genhtml_function_coverage=1 00:08:21.427 --rc 
genhtml_legend=1 00:08:21.428 --rc geninfo_all_blocks=1 00:08:21.428 --rc geninfo_unexecuted_blocks=1 00:08:21.428 00:08:21.428 ' 00:08:21.428 04:25:17 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.428 --rc genhtml_branch_coverage=1 00:08:21.428 --rc genhtml_function_coverage=1 00:08:21.428 --rc genhtml_legend=1 00:08:21.428 --rc geninfo_all_blocks=1 00:08:21.428 --rc geninfo_unexecuted_blocks=1 00:08:21.428 00:08:21.428 ' 00:08:21.428 04:25:17 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.428 --rc genhtml_branch_coverage=1 00:08:21.428 --rc genhtml_function_coverage=1 00:08:21.428 --rc genhtml_legend=1 00:08:21.428 --rc geninfo_all_blocks=1 00:08:21.428 --rc geninfo_unexecuted_blocks=1 00:08:21.428 00:08:21.428 ' 00:08:21.428 04:25:17 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.428 --rc genhtml_branch_coverage=1 00:08:21.428 --rc genhtml_function_coverage=1 00:08:21.428 --rc genhtml_legend=1 00:08:21.428 --rc geninfo_all_blocks=1 00:08:21.428 --rc geninfo_unexecuted_blocks=1 00:08:21.428 00:08:21.428 ' 00:08:21.428 04:25:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:21.428 04:25:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59974 00:08:21.428 04:25:18 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:21.428 04:25:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59974 00:08:21.428 04:25:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59974 ']' 00:08:21.428 04:25:18 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.428 04:25:18 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:08:21.428 04:25:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.428 04:25:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.428 04:25:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:21.686 [2024-11-27 04:25:18.105170] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:21.686 [2024-11-27 04:25:18.105385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 00:08:21.946 [2024-11-27 04:25:18.286232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.946 [2024-11-27 04:25:18.408846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.882 04:25:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.882 04:25:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:22.882 04:25:19 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:23.141 { 00:08:23.141 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:08:23.141 "fields": { 00:08:23.141 "major": 25, 00:08:23.141 "minor": 1, 00:08:23.141 "patch": 0, 00:08:23.141 "suffix": "-pre", 00:08:23.141 "commit": "2f2acf4eb" 00:08:23.141 } 00:08:23.141 } 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:23.141 04:25:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:23.141 04:25:19 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:23.401 request: 00:08:23.401 { 00:08:23.401 "method": "env_dpdk_get_mem_stats", 00:08:23.401 "req_id": 1 00:08:23.401 } 00:08:23.401 Got JSON-RPC error response 00:08:23.401 response: 00:08:23.401 { 00:08:23.401 "code": -32601, 00:08:23.401 "message": "Method not found" 00:08:23.401 } 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:23.401 04:25:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59974 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59974 ']' 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59974 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59974 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.401 killing process with pid 59974 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59974' 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 59974 00:08:23.401 04:25:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 59974 00:08:25.937 00:08:25.937 real 0m4.591s 00:08:25.937 user 0m4.818s 00:08:25.937 sys 0m0.646s 00:08:25.937 04:25:22 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.937 04:25:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.937 ************************************ 00:08:25.937 END TEST app_cmdline 00:08:25.937 ************************************ 00:08:25.937 04:25:22 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:25.937 04:25:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.937 04:25:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.937 04:25:22 -- common/autotest_common.sh@10 -- # set +x 00:08:25.937 ************************************ 00:08:25.937 START TEST version 00:08:25.937 ************************************ 00:08:25.937 04:25:22 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:26.196 * Looking for test storage... 00:08:26.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:26.196 04:25:22 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.196 04:25:22 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.196 04:25:22 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.196 04:25:22 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.196 04:25:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.196 04:25:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.196 04:25:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.196 04:25:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.196 04:25:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.196 04:25:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.196 04:25:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.196 04:25:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.196 04:25:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.196 04:25:22 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:26.196 04:25:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.196 04:25:22 version -- scripts/common.sh@344 -- # case "$op" in 00:08:26.196 04:25:22 version -- scripts/common.sh@345 -- # : 1 00:08:26.196 04:25:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.196 04:25:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:26.196 04:25:22 version -- scripts/common.sh@365 -- # decimal 1 00:08:26.196 04:25:22 version -- scripts/common.sh@353 -- # local d=1 00:08:26.196 04:25:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.196 04:25:22 version -- scripts/common.sh@355 -- # echo 1 00:08:26.196 04:25:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.196 04:25:22 version -- scripts/common.sh@366 -- # decimal 2 00:08:26.196 04:25:22 version -- scripts/common.sh@353 -- # local d=2 00:08:26.196 04:25:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.196 04:25:22 version -- scripts/common.sh@355 -- # echo 2 00:08:26.196 04:25:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.196 04:25:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.196 04:25:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.196 04:25:22 version -- scripts/common.sh@368 -- # return 0 00:08:26.196 04:25:22 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.196 04:25:22 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.196 --rc genhtml_branch_coverage=1 00:08:26.196 --rc genhtml_function_coverage=1 00:08:26.196 --rc genhtml_legend=1 00:08:26.196 --rc geninfo_all_blocks=1 00:08:26.196 --rc geninfo_unexecuted_blocks=1 00:08:26.196 00:08:26.196 ' 00:08:26.196 04:25:22 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:08:26.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.197 --rc genhtml_branch_coverage=1 00:08:26.197 --rc genhtml_function_coverage=1 00:08:26.197 --rc genhtml_legend=1 00:08:26.197 --rc geninfo_all_blocks=1 00:08:26.197 --rc geninfo_unexecuted_blocks=1 00:08:26.197 00:08:26.197 ' 00:08:26.197 04:25:22 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.197 --rc genhtml_branch_coverage=1 00:08:26.197 --rc genhtml_function_coverage=1 00:08:26.197 --rc genhtml_legend=1 00:08:26.197 --rc geninfo_all_blocks=1 00:08:26.197 --rc geninfo_unexecuted_blocks=1 00:08:26.197 00:08:26.197 ' 00:08:26.197 04:25:22 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.197 --rc genhtml_branch_coverage=1 00:08:26.197 --rc genhtml_function_coverage=1 00:08:26.197 --rc genhtml_legend=1 00:08:26.197 --rc geninfo_all_blocks=1 00:08:26.197 --rc geninfo_unexecuted_blocks=1 00:08:26.197 00:08:26.197 ' 00:08:26.197 04:25:22 version -- app/version.sh@17 -- # get_header_version major 00:08:26.197 04:25:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:26.197 04:25:22 version -- app/version.sh@14 -- # cut -f2 00:08:26.197 04:25:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:26.197 04:25:22 version -- app/version.sh@17 -- # major=25 00:08:26.197 04:25:22 version -- app/version.sh@18 -- # get_header_version minor 00:08:26.197 04:25:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:26.197 04:25:22 version -- app/version.sh@14 -- # cut -f2 00:08:26.197 04:25:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:26.197 04:25:22 version -- app/version.sh@18 -- # minor=1 00:08:26.197 04:25:22 
version -- app/version.sh@19 -- # get_header_version patch 00:08:26.197 04:25:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:26.197 04:25:22 version -- app/version.sh@14 -- # cut -f2 00:08:26.197 04:25:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:26.197 04:25:22 version -- app/version.sh@19 -- # patch=0 00:08:26.197 04:25:22 version -- app/version.sh@20 -- # get_header_version suffix 00:08:26.197 04:25:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:26.197 04:25:22 version -- app/version.sh@14 -- # cut -f2 00:08:26.197 04:25:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:26.197 04:25:22 version -- app/version.sh@20 -- # suffix=-pre 00:08:26.197 04:25:22 version -- app/version.sh@22 -- # version=25.1 00:08:26.197 04:25:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:26.197 04:25:22 version -- app/version.sh@28 -- # version=25.1rc0 00:08:26.197 04:25:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:26.197 04:25:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:26.197 04:25:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:26.197 04:25:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:26.197 ************************************ 00:08:26.197 END TEST version 00:08:26.197 ************************************ 00:08:26.197 00:08:26.197 real 0m0.327s 00:08:26.197 user 0m0.194s 00:08:26.197 sys 0m0.189s 00:08:26.197 04:25:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.197 04:25:22 version -- common/autotest_common.sh@10 -- # set +x 00:08:26.488 
04:25:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:26.488 04:25:22 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:26.488 04:25:22 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:26.488 04:25:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.488 04:25:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.488 04:25:22 -- common/autotest_common.sh@10 -- # set +x 00:08:26.488 ************************************ 00:08:26.488 START TEST bdev_raid 00:08:26.488 ************************************ 00:08:26.488 04:25:22 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:26.488 * Looking for test storage... 00:08:26.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:26.488 04:25:22 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.488 04:25:22 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.488 04:25:22 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.488 04:25:23 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.488 04:25:23 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:26.488 04:25:23 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.488 04:25:23 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.488 --rc genhtml_branch_coverage=1 00:08:26.488 --rc genhtml_function_coverage=1 00:08:26.488 --rc genhtml_legend=1 00:08:26.488 --rc geninfo_all_blocks=1 00:08:26.488 --rc geninfo_unexecuted_blocks=1 00:08:26.488 00:08:26.488 ' 00:08:26.488 04:25:23 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.488 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:26.488 --rc genhtml_branch_coverage=1 00:08:26.488 --rc genhtml_function_coverage=1 00:08:26.488 --rc genhtml_legend=1 00:08:26.488 --rc geninfo_all_blocks=1 00:08:26.488 --rc geninfo_unexecuted_blocks=1 00:08:26.488 00:08:26.488 ' 00:08:26.488 04:25:23 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.488 --rc genhtml_branch_coverage=1 00:08:26.488 --rc genhtml_function_coverage=1 00:08:26.488 --rc genhtml_legend=1 00:08:26.488 --rc geninfo_all_blocks=1 00:08:26.488 --rc geninfo_unexecuted_blocks=1 00:08:26.488 00:08:26.488 ' 00:08:26.488 04:25:23 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.488 --rc genhtml_branch_coverage=1 00:08:26.488 --rc genhtml_function_coverage=1 00:08:26.488 --rc genhtml_legend=1 00:08:26.488 --rc geninfo_all_blocks=1 00:08:26.488 --rc geninfo_unexecuted_blocks=1 00:08:26.488 00:08:26.488 ' 00:08:26.488 04:25:23 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:26.488 04:25:23 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:26.488 04:25:23 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:26.488 04:25:23 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:26.488 04:25:23 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:26.488 04:25:23 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:26.488 04:25:23 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:26.488 04:25:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.488 04:25:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.488 04:25:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.488 ************************************ 
00:08:26.488 START TEST raid1_resize_data_offset_test 00:08:26.488 ************************************ 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60162 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60162' 00:08:26.488 Process raid pid: 60162 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60162 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60162 ']' 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.488 04:25:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.747 [2024-11-27 04:25:23.149967] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:26.747 [2024-11-27 04:25:23.150225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.747 [2024-11-27 04:25:23.328515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.007 [2024-11-27 04:25:23.447945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.265 [2024-11-27 04:25:23.664105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.265 [2024-11-27 04:25:23.664277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.523 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.523 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.523 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:27.523 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.523 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.523 malloc0 00:08:27.523 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.523 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:27.523 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.523 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.783 malloc1 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.783 04:25:24 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.783 null0 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.783 [2024-11-27 04:25:24.186946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:27.783 [2024-11-27 04:25:24.188976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:27.783 [2024-11-27 04:25:24.189040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:27.783 [2024-11-27 04:25:24.189230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:27.783 [2024-11-27 04:25:24.189248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:27.783 [2024-11-27 04:25:24.189548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:27.783 [2024-11-27 04:25:24.189745] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:27.783 [2024-11-27 04:25:24.189761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:27.783 [2024-11-27 04:25:24.189937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.783 [2024-11-27 04:25:24.246876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.783 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.351 malloc2 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.351 [2024-11-27 04:25:24.864799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:28.351 [2024-11-27 04:25:24.886769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.351 [2024-11-27 04:25:24.889340] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60162 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60162 ']' 00:08:28.351 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60162 00:08:28.610 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:08:28.610 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:08:28.610 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60162 00:08:28.610 killing process with pid 60162 00:08:28.610 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.610 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.610 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60162' 00:08:28.610 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60162 00:08:28.610 [2024-11-27 04:25:24.979002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.610 04:25:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60162 00:08:28.610 [2024-11-27 04:25:24.979880] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:28.610 [2024-11-27 04:25:24.980099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.610 [2024-11-27 04:25:24.980127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:28.610 [2024-11-27 04:25:25.028447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.610 [2024-11-27 04:25:25.029011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.610 [2024-11-27 04:25:25.029126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:31.141 [2024-11-27 04:25:27.342692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.522 04:25:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:32.522 00:08:32.522 real 0m5.717s 00:08:32.522 user 0m5.557s 00:08:32.522 sys 0m0.598s 00:08:32.522 04:25:28 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.522 04:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.522 ************************************ 00:08:32.522 END TEST raid1_resize_data_offset_test 00:08:32.522 ************************************ 00:08:32.522 04:25:28 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:32.522 04:25:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:32.522 04:25:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.522 04:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.522 ************************************ 00:08:32.522 START TEST raid0_resize_superblock_test 00:08:32.522 ************************************ 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60256 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60256' 00:08:32.523 Process raid pid: 60256 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60256 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60256 ']' 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.523 04:25:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.523 [2024-11-27 04:25:28.930884] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:32.523 [2024-11-27 04:25:28.931119] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.796 [2024-11-27 04:25:29.119344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.796 [2024-11-27 04:25:29.278477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.058 [2024-11-27 04:25:29.546556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.058 [2024-11-27 04:25:29.546749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.319 04:25:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.319 04:25:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:33.319 04:25:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:33.319 04:25:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.319 04:25:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:34.275 malloc0 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.275 [2024-11-27 04:25:30.543761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:34.275 [2024-11-27 04:25:30.543996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.275 [2024-11-27 04:25:30.544037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:34.275 [2024-11-27 04:25:30.544058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.275 [2024-11-27 04:25:30.547068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.275 [2024-11-27 04:25:30.547213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:34.275 pt0 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.275 e291f7ad-eb97-4122-97cb-b2904ba01e98 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.275 a1072db3-2f3d-4e11-a39c-6ba6a7698888 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.275 557f9a53-6529-4fce-8f29-d3291bbb3ea8 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.275 [2024-11-27 04:25:30.756184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a1072db3-2f3d-4e11-a39c-6ba6a7698888 is claimed 00:08:34.275 [2024-11-27 04:25:30.756555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 557f9a53-6529-4fce-8f29-d3291bbb3ea8 is claimed 00:08:34.275 [2024-11-27 04:25:30.756819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:34.275 [2024-11-27 04:25:30.756883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:34.275 [2024-11-27 04:25:30.757380] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:34.275 [2024-11-27 04:25:30.757739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:34.275 [2024-11-27 04:25:30.757811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:34.275 [2024-11-27 04:25:30.758114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:34.275 04:25:30 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:34.275 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:34.275 [2024-11-27 04:25:30.852373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 [2024-11-27 04:25:30.904180] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:34.535 [2024-11-27 04:25:30.904263] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a1072db3-2f3d-4e11-a39c-6ba6a7698888' was resized: old size 131072, new size 204800 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 [2024-11-27 04:25:30.916149] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:34.535 [2024-11-27 04:25:30.916202] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '557f9a53-6529-4fce-8f29-d3291bbb3ea8' was resized: old size 131072, new size 204800 00:08:34.535 [2024-11-27 04:25:30.916273] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.535 04:25:30 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:34.535 [2024-11-27 04:25:31.007989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:34.535 04:25:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 [2024-11-27 04:25:31.051672] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:34.535 [2024-11-27 04:25:31.051786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:34.535 [2024-11-27 04:25:31.051806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.535 [2024-11-27 04:25:31.051824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:34.535 [2024-11-27 04:25:31.051979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.535 [2024-11-27 04:25:31.052018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.535 [2024-11-27 04:25:31.052034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 [2024-11-27 04:25:31.063523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:34.535 [2024-11-27 04:25:31.063629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.535 [2024-11-27 04:25:31.063657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:34.535 [2024-11-27 04:25:31.063671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.535 [2024-11-27 04:25:31.066723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.535 [2024-11-27 04:25:31.066798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:34.535 pt0 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:34.535 [2024-11-27 04:25:31.069092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a1072db3-2f3d-4e11-a39c-6ba6a7698888 00:08:34.535 [2024-11-27 04:25:31.069188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a1072db3-2f3d-4e11-a39c-6ba6a7698888 is claimed 00:08:34.535 [2024-11-27 04:25:31.069345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 557f9a53-6529-4fce-8f29-d3291bbb3ea8 00:08:34.535 [2024-11-27 04:25:31.069370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 557f9a53-6529-4fce-8f29-d3291bbb3ea8 is claimed 00:08:34.535 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.535 [2024-11-27 04:25:31.069554] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 557f9a53-6529-4fce-8f29-d3291bbb3ea8 (2) smaller than existing raid bdev Raid (3) 00:08:34.536 [2024-11-27 04:25:31.069583] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a1072db3-2f3d-4e11-a39c-6ba6a7698888: File exists 00:08:34.536 [2024-11-27 04:25:31.069628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:34.536 [2024-11-27 04:25:31.069643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:34.536 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.536 [2024-11-27 04:25:31.069953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:34.536 [2024-11-27 04:25:31.070133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:34.536 [2024-11-27 
04:25:31.070143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:34.536 [2024-11-27 04:25:31.070313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.536 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.536 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:34.536 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:34.536 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:34.536 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:34.536 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.536 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.536 [2024-11-27 04:25:31.091835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.536 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60256 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60256 ']' 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60256 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60256 00:08:34.795 killing process with pid 60256 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60256' 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60256 00:08:34.795 [2024-11-27 04:25:31.159009] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.795 04:25:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60256 00:08:34.795 [2024-11-27 04:25:31.159162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.795 [2024-11-27 04:25:31.159229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.795 [2024-11-27 04:25:31.159241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:36.697 [2024-11-27 04:25:32.951597] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.073 04:25:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:38.073 00:08:38.073 real 0m5.529s 00:08:38.073 user 0m5.559s 00:08:38.073 sys 0m0.749s 00:08:38.073 ************************************ 00:08:38.073 END TEST raid0_resize_superblock_test 00:08:38.073 ************************************ 00:08:38.073 04:25:34 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.073 04:25:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.073 04:25:34 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:38.073 04:25:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.073 04:25:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.073 04:25:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.073 ************************************ 00:08:38.073 START TEST raid1_resize_superblock_test 00:08:38.073 ************************************ 00:08:38.073 04:25:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:08:38.073 04:25:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:38.073 04:25:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60366 00:08:38.073 Process raid pid: 60366 00:08:38.073 04:25:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:38.073 04:25:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60366' 00:08:38.073 04:25:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60366 00:08:38.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.074 04:25:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60366 ']' 00:08:38.074 04:25:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.074 04:25:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.074 04:25:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.074 04:25:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.074 04:25:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.074 [2024-11-27 04:25:34.532946] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:38.074 [2024-11-27 04:25:34.533168] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.334 [2024-11-27 04:25:34.711628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.334 [2024-11-27 04:25:34.869388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.594 [2024-11-27 04:25:35.140146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.594 [2024-11-27 04:25:35.140320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.855 04:25:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.855 04:25:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:38.855 04:25:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:08:38.855 04:25:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.855 04:25:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.796 malloc0 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.796 [2024-11-27 04:25:36.113124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:39.796 [2024-11-27 04:25:36.113226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.796 [2024-11-27 04:25:36.113256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:39.796 [2024-11-27 04:25:36.113276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.796 [2024-11-27 04:25:36.116250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.796 [2024-11-27 04:25:36.116324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:39.796 pt0 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.796 c9334d42-6d33-4f12-9511-c2d4981f2cf5 00:08:39.796 04:25:36 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.796 75fb8a99-7a77-47e0-b653-3afc4dbafeab 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.796 7ee68101-eb82-440d-a9a8-eb1873dc6b34 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.796 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.796 [2024-11-27 04:25:36.324037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 75fb8a99-7a77-47e0-b653-3afc4dbafeab is claimed 00:08:39.796 [2024-11-27 04:25:36.324387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7ee68101-eb82-440d-a9a8-eb1873dc6b34 is claimed 00:08:39.796 [2024-11-27 04:25:36.324635] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:39.797 [2024-11-27 04:25:36.324658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:39.797 [2024-11-27 04:25:36.325080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:39.797 [2024-11-27 04:25:36.325372] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:39.797 [2024-11-27 04:25:36.325386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:39.797 [2024-11-27 04:25:36.325617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.797 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.797 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:39.797 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.797 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:39.797 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.797 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.797 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:40.057 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:40.057 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:40.057 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.057 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.057 04:25:36 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.058 [2024-11-27 04:25:36.440178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.058 [2024-11-27 04:25:36.484114] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:40.058 [2024-11-27 04:25:36.484166] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '75fb8a99-7a77-47e0-b653-3afc4dbafeab' was resized: old size 131072, new size 204800 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.058 [2024-11-27 04:25:36.491947] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:40.058 [2024-11-27 04:25:36.491994] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7ee68101-eb82-440d-a9a8-eb1873dc6b34' was resized: old size 131072, new size 204800 00:08:40.058 [2024-11-27 04:25:36.492034] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:40.058 04:25:36 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.058 [2024-11-27 04:25:36.591870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.058 [2024-11-27 04:25:36.635543] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:40.058 [2024-11-27 04:25:36.635748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:40.058 [2024-11-27 04:25:36.635806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:40.058 [2024-11-27 04:25:36.636016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.058 [2024-11-27 04:25:36.636324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.058 [2024-11-27 04:25:36.636405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.058 [2024-11-27 04:25:36.636422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.058 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.318 [2024-11-27 04:25:36.647354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:40.318 [2024-11-27 04:25:36.647496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.318 [2024-11-27 04:25:36.647543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:40.318 [2024-11-27 04:25:36.647587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.318 
[2024-11-27 04:25:36.650627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.318 [2024-11-27 04:25:36.650721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:40.318 pt0 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.318 [2024-11-27 04:25:36.652885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 75fb8a99-7a77-47e0-b653-3afc4dbafeab 00:08:40.318 [2024-11-27 04:25:36.653052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 75fb8a99-7a77-47e0-b653-3afc4dbafeab is claimed 00:08:40.318 [2024-11-27 04:25:36.653283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7ee68101-eb82-440d-a9a8-eb1873dc6b34 00:08:40.318 [2024-11-27 04:25:36.653356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7ee68101-eb82-440d-a9a8-eb1873dc6b34 is claimed 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:40.318 [2024-11-27 04:25:36.653568] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7ee68101-eb82-440d-a9a8-eb1873dc6b34 (2) smaller than existing raid bdev Raid (3) 00:08:40.318 [2024-11-27 04:25:36.653600] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 75fb8a99-7a77-47e0-b653-3afc4dbafeab: File exists 00:08:40.318 [2024-11-27 04:25:36.653646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:40.318 [2024-11-27 04:25:36.653661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:40.318 [2024-11-27 04:25:36.653962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.318 [2024-11-27 
04:25:36.654200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:40.318 [2024-11-27 04:25:36.654212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:40.318 [2024-11-27 04:25:36.654401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.318 [2024-11-27 04:25:36.675710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:40.318 04:25:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60366 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60366 ']' 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60366 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60366 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.319 killing process with pid 60366 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60366' 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60366 00:08:40.319 [2024-11-27 04:25:36.751236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.319 [2024-11-27 04:25:36.751400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.319 [2024-11-27 04:25:36.751470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.319 [2024-11-27 04:25:36.751480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:40.319 04:25:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60366 00:08:42.226 [2024-11-27 04:25:38.475546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.605 ************************************ 00:08:43.605 END TEST raid1_resize_superblock_test 00:08:43.605 ************************************ 00:08:43.605 04:25:39 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:43.605 00:08:43.605 real 0m5.476s 00:08:43.605 user 0m5.561s 00:08:43.605 sys 0m0.725s 00:08:43.605 04:25:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.605 04:25:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.605 04:25:39 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:43.605 04:25:39 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:43.605 04:25:39 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:43.605 04:25:39 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:43.605 04:25:39 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:43.605 04:25:39 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:43.605 04:25:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.605 04:25:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.605 04:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.605 ************************************ 00:08:43.605 START TEST raid_function_test_raid0 00:08:43.605 ************************************ 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60474 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60474' 00:08:43.605 Process raid pid: 60474 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60474 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60474 ']' 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.605 04:25:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:43.605 [2024-11-27 04:25:40.104386] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:43.605 [2024-11-27 04:25:40.104667] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.865 [2024-11-27 04:25:40.305723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.124 [2024-11-27 04:25:40.458894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.382 [2024-11-27 04:25:40.728338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.383 [2024-11-27 04:25:40.728403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.642 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.642 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:08:44.642 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:44.642 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.642 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 Base_1 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:44.643 Base_2 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:44.643 [2024-11-27 04:25:41.135461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:44.643 [2024-11-27 04:25:41.137971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:44.643 [2024-11-27 04:25:41.138171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:44.643 [2024-11-27 04:25:41.138192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:44.643 [2024-11-27 04:25:41.138530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:44.643 [2024-11-27 04:25:41.138717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:44.643 [2024-11-27 04:25:41.138728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:44.643 [2024-11-27 04:25:41.138921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:44.643 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:44.903 [2024-11-27 04:25:41.383185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:44.903 /dev/nbd0 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.903 
04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.903 1+0 records in 00:08:44.903 1+0 records out 00:08:44.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451816 s, 9.1 MB/s 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:44.903 04:25:41 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:45.165 { 00:08:45.165 "nbd_device": "/dev/nbd0", 00:08:45.165 "bdev_name": "raid" 00:08:45.165 } 00:08:45.165 ]' 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:45.165 { 00:08:45.165 "nbd_device": "/dev/nbd0", 00:08:45.165 "bdev_name": "raid" 00:08:45.165 } 00:08:45.165 ]' 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:45.165 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:45.425 4096+0 records in 00:08:45.425 4096+0 records out 00:08:45.425 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0326556 s, 64.2 MB/s 00:08:45.425 04:25:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:45.425 4096+0 records in 00:08:45.425 4096+0 records out 00:08:45.425 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.216506 s, 9.7 MB/s 00:08:45.425 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:45.683 128+0 records in 00:08:45.683 128+0 records out 00:08:45.683 65536 bytes (66 kB, 64 KiB) copied, 0.00139452 s, 47.0 MB/s 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:45.683 2035+0 records in 00:08:45.683 2035+0 records out 00:08:45.683 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0144809 s, 72.0 MB/s 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:45.683 04:25:42 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:45.683 456+0 records in 00:08:45.683 456+0 records out 00:08:45.683 233472 bytes (233 kB, 228 KiB) copied, 0.00298128 s, 78.3 MB/s 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:45.683 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:45.684 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:45.684 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:45.684 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.684 04:25:42 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:45.684 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.684 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:45.943 [2024-11-27 04:25:42.347584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:45.943 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60474 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60474 ']' 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60474 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60474 00:08:46.203 killing process with pid 60474 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60474' 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60474 
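The unmap-data-verify loop traced above (bdev_raid.sh@36–48) follows a simple pattern: keep a reference file in lockstep with the exported nbd device, zero a region of the reference with `dd conv=notrunc`, discard the same byte range on the device with `blkdiscard`, flush, and `cmp` the full 2 MiB. The sketch below reproduces that pattern with the same block offsets/counts seen in the log (0/128, 1028/2035, 321/456 blocks of 512 bytes); it uses two temp files instead of a real nbd device (so a second `dd` stands in for `blkdiscard`), and all paths are illustrative, not the test's actual ones.

```shell
#!/usr/bin/env bash
# Sketch of the raid_unmap_data_verify pattern from the log, under the
# assumption that zeroing a regular file approximates discard-reads-as-zero.
set -e
blksize=512
rw_blk_num=4096
ref=$(mktemp)   # stands in for /raidtest/raidrandtest
dev=$(mktemp)   # stands in for /dev/nbd0 (a regular file here)

# Seed both sides with the same 2 MiB of random data.
dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num 2>/dev/null
cp "$ref" "$dev"

offs=(0 1028 321)   # unmap_blk_offs from the log
nums=(128 2035 456) # unmap_blk_nums from the log
for i in 0 1 2; do
  off=$(( offs[i] * blksize ))
  len=$(( nums[i] * blksize ))
  # Zero the region in the reference copy (mirrors bdev_raid.sh@41).
  dd if=/dev/zero of="$ref" bs=$blksize seek="${offs[i]}" count="${nums[i]}" \
     conv=notrunc 2>/dev/null
  # On a real nbd device this step would be: blkdiscard -o "$off" -l "$len" "$dev"
  dd if=/dev/zero of="$dev" bs=$blksize seek="${offs[i]}" count="${nums[i]}" \
     conv=notrunc 2>/dev/null
  # Full-length compare after every discard (mirrors bdev_raid.sh@48);
  # set -e aborts the loop if any byte differs.
  cmp -b -n $(( rw_blk_num * blksize )) "$ref" "$dev"
done
echo "all regions match"
rm -f "$ref" "$dev"
```

The per-iteration full-device `cmp` is what makes this a functional test of the raid bdev's unmap path rather than of `blkdiscard` alone: a discard that corrupted neighboring stripes would show up as a mismatch outside the discarded range.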
00:08:46.203 [2024-11-27 04:25:42.721785] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.203 [2024-11-27 04:25:42.721916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.203 04:25:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60474 00:08:46.203 [2024-11-27 04:25:42.721972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.203 [2024-11-27 04:25:42.721989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:46.462 [2024-11-27 04:25:42.941506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.842 04:25:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:47.842 00:08:47.842 real 0m4.099s 00:08:47.842 user 0m4.722s 00:08:47.842 sys 0m1.082s 00:08:47.842 04:25:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.842 ************************************ 00:08:47.842 END TEST raid_function_test_raid0 00:08:47.842 ************************************ 00:08:47.842 04:25:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:47.842 04:25:44 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:47.842 04:25:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.842 04:25:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.842 04:25:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.842 ************************************ 00:08:47.842 START TEST raid_function_test_concat 00:08:47.842 ************************************ 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60603 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60603' 00:08:47.842 Process raid pid: 60603 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60603 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60603 ']' 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.842 04:25:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:47.842 [2024-11-27 04:25:44.263591] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:47.842 [2024-11-27 04:25:44.263794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.101 [2024-11-27 04:25:44.442125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.101 [2024-11-27 04:25:44.558237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.358 [2024-11-27 04:25:44.767478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.358 [2024-11-27 04:25:44.767623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.618 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.618 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:08:48.618 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:48.618 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.618 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:48.618 Base_1 00:08:48.618 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.618 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:48.618 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.618 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:48.878 Base_2 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:48.878 [2024-11-27 04:25:45.213550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:48.878 [2024-11-27 04:25:45.215647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:48.878 [2024-11-27 04:25:45.215741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:48.878 [2024-11-27 04:25:45.215754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:48.878 [2024-11-27 04:25:45.216014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:48.878 [2024-11-27 04:25:45.216207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:48.878 [2024-11-27 04:25:45.216220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:48.878 [2024-11-27 04:25:45.216403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.878 04:25:45 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:48.878 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:49.137 [2024-11-27 04:25:45.489184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:49.137 /dev/nbd0 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.137 1+0 records in 00:08:49.137 1+0 records out 00:08:49.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605427 s, 6.8 MB/s 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:08:49.137 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:49.396 { 00:08:49.396 "nbd_device": "/dev/nbd0", 00:08:49.396 "bdev_name": "raid" 00:08:49.396 } 00:08:49.396 ]' 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:49.396 { 00:08:49.396 "nbd_device": "/dev/nbd0", 00:08:49.396 "bdev_name": "raid" 00:08:49.396 } 00:08:49.396 ]' 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:49.396 04:25:45 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:49.396 4096+0 records in 00:08:49.396 4096+0 records out 00:08:49.396 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0345939 s, 60.6 MB/s 00:08:49.396 04:25:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:49.655 4096+0 records in 00:08:49.655 4096+0 records out 00:08:49.655 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.226793 s, 9.2 MB/s 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:49.655 128+0 records in 00:08:49.655 128+0 records out 00:08:49.655 65536 bytes (66 kB, 64 KiB) copied, 0.00112536 s, 58.2 MB/s 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:49.655 2035+0 records in 00:08:49.655 2035+0 records out 00:08:49.655 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0146727 s, 71.0 MB/s 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:49.655 456+0 records in 00:08:49.655 456+0 records out 00:08:49.655 233472 bytes (233 kB, 228 KiB) copied, 0.00237852 s, 98.2 MB/s 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:49.655 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:49.922 04:25:46 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:49.922 [2024-11-27 04:25:46.463846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:49.922 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60603 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60603 ']' 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60603 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.183 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60603 00:08:50.442 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.442 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.442 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60603' 00:08:50.442 
killing process with pid 60603 00:08:50.442 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60603 00:08:50.442 [2024-11-27 04:25:46.796704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.442 [2024-11-27 04:25:46.796815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.442 [2024-11-27 04:25:46.796869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.442 [2024-11-27 04:25:46.796883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:50.442 04:25:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60603 00:08:50.442 [2024-11-27 04:25:47.013702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.840 04:25:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:51.840 00:08:51.840 real 0m3.990s 00:08:51.840 user 0m4.702s 00:08:51.840 sys 0m0.930s 00:08:51.840 ************************************ 00:08:51.840 END TEST raid_function_test_concat 00:08:51.840 ************************************ 00:08:51.840 04:25:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.840 04:25:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:51.840 04:25:48 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:51.840 04:25:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:51.840 04:25:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.840 04:25:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.840 ************************************ 00:08:51.840 START TEST raid0_resize_test 00:08:51.840 ************************************ 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60732 00:08:51.840 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60732' 00:08:51.841 Process raid pid: 60732 00:08:51.841 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.841 04:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60732 00:08:51.841 04:25:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60732 ']' 00:08:51.841 04:25:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.841 04:25:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.841 04:25:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:51.841 04:25:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.841 04:25:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.841 [2024-11-27 04:25:48.333018] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:51.841 [2024-11-27 04:25:48.333167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.099 [2024-11-27 04:25:48.507418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.099 [2024-11-27 04:25:48.625779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.358 [2024-11-27 04:25:48.838469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.358 [2024-11-27 04:25:48.838515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.928 Base_1 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:52.928 Base_2 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.928 [2024-11-27 04:25:49.254823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:52.928 [2024-11-27 04:25:49.256980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:52.928 [2024-11-27 04:25:49.257037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:52.928 [2024-11-27 04:25:49.257049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:52.928 [2024-11-27 04:25:49.257368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:52.928 [2024-11-27 04:25:49.257496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:52.928 [2024-11-27 04:25:49.257510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:52.928 [2024-11-27 04:25:49.257684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:08:52.928 [2024-11-27 04:25:49.262796] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:52.928 [2024-11-27 04:25:49.262864] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:52.928 true 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.928 [2024-11-27 04:25:49.274963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.928 [2024-11-27 04:25:49.322700] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:52.928 [2024-11-27 04:25:49.322727] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:52.928 [2024-11-27 04:25:49.322761] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:52.928 true 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:52.928 [2024-11-27 04:25:49.334874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60732 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60732 ']' 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60732 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60732 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60732' 00:08:52.928 killing process with pid 60732 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60732 00:08:52.928 [2024-11-27 04:25:49.415099] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.928 [2024-11-27 04:25:49.415264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.928 04:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60732 00:08:52.928 [2024-11-27 04:25:49.415346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.928 [2024-11-27 04:25:49.415358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:52.928 [2024-11-27 04:25:49.433897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.309 04:25:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:54.309 00:08:54.309 real 0m2.364s 00:08:54.309 user 0m2.549s 00:08:54.309 sys 0m0.338s 00:08:54.309 04:25:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.309 04:25:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.309 ************************************ 00:08:54.309 END TEST raid0_resize_test 00:08:54.309 ************************************ 00:08:54.309 04:25:50 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:54.309 
04:25:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.309 04:25:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.309 04:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.309 ************************************ 00:08:54.309 START TEST raid1_resize_test 00:08:54.309 ************************************ 00:08:54.309 04:25:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:08:54.309 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:54.310 Process raid pid: 60788 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60788 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60788' 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60788 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60788 ']' 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.310 04:25:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.310 [2024-11-27 04:25:50.762865] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:54.310 [2024-11-27 04:25:50.763002] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.569 [2024-11-27 04:25:50.938559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.569 [2024-11-27 04:25:51.057516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.829 [2024-11-27 04:25:51.269843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.829 [2024-11-27 04:25:51.269981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.090 
Base_1 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.090 Base_2 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.090 [2024-11-27 04:25:51.655400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:55.090 [2024-11-27 04:25:51.657369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:55.090 [2024-11-27 04:25:51.657444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:55.090 [2024-11-27 04:25:51.657455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:55.090 [2024-11-27 04:25:51.657704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:55.090 [2024-11-27 04:25:51.657827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:55.090 [2024-11-27 04:25:51.657835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:55.090 [2024-11-27 04:25:51.657977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.090 [2024-11-27 04:25:51.667370] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:55.090 [2024-11-27 04:25:51.667400] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:55.090 true 00:08:55.090 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.350 [2024-11-27 04:25:51.683545] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.350 [2024-11-27 04:25:51.727306] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:55.350 [2024-11-27 04:25:51.727338] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:55.350 [2024-11-27 04:25:51.727376] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:55.350 true 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:55.350 [2024-11-27 04:25:51.739462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60788 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60788 ']' 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60788 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60788 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60788' 00:08:55.350 killing process with pid 60788 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60788 00:08:55.350 [2024-11-27 04:25:51.827515] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.350 [2024-11-27 04:25:51.827687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.350 04:25:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60788 00:08:55.350 [2024-11-27 04:25:51.828265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.350 [2024-11-27 04:25:51.828343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:55.350 [2024-11-27 04:25:51.847399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.733 04:25:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:56.733 00:08:56.733 real 0m2.333s 00:08:56.733 user 0m2.499s 00:08:56.733 sys 0m0.337s 00:08:56.733 04:25:53 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.733 04:25:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.733 ************************************ 00:08:56.733 END TEST raid1_resize_test 00:08:56.733 ************************************ 00:08:56.733 04:25:53 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:56.733 04:25:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:56.733 04:25:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:56.733 04:25:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:56.733 04:25:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.733 04:25:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.733 ************************************ 00:08:56.733 START TEST raid_state_function_test 00:08:56.733 ************************************ 00:08:56.733 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:08:56.733 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:56.733 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:56.733 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:56.733 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:56.733 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:56.733 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.733 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:56.733 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60851 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60851' 00:08:56.734 Process raid pid: 60851 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60851 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60851 ']' 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.734 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.734 [2024-11-27 04:25:53.180756] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:56.734 [2024-11-27 04:25:53.180892] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.994 [2024-11-27 04:25:53.356443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.994 [2024-11-27 04:25:53.475236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.254 [2024-11-27 04:25:53.680606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.254 [2024-11-27 04:25:53.680652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.513 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.513 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.514 [2024-11-27 04:25:54.031487] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.514 [2024-11-27 04:25:54.031575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.514 [2024-11-27 04:25:54.031588] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.514 [2024-11-27 04:25:54.031601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.514 04:25:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.514 "name": "Existed_Raid", 00:08:57.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.514 "strip_size_kb": 64, 00:08:57.514 "state": "configuring", 00:08:57.514 
"raid_level": "raid0", 00:08:57.514 "superblock": false, 00:08:57.514 "num_base_bdevs": 2, 00:08:57.514 "num_base_bdevs_discovered": 0, 00:08:57.514 "num_base_bdevs_operational": 2, 00:08:57.514 "base_bdevs_list": [ 00:08:57.514 { 00:08:57.514 "name": "BaseBdev1", 00:08:57.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.514 "is_configured": false, 00:08:57.514 "data_offset": 0, 00:08:57.514 "data_size": 0 00:08:57.514 }, 00:08:57.514 { 00:08:57.514 "name": "BaseBdev2", 00:08:57.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.514 "is_configured": false, 00:08:57.514 "data_offset": 0, 00:08:57.514 "data_size": 0 00:08:57.514 } 00:08:57.514 ] 00:08:57.514 }' 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.514 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.083 [2024-11-27 04:25:54.458726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.083 [2024-11-27 04:25:54.458892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:58.083 [2024-11-27 04:25:54.470710] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.083 [2024-11-27 04:25:54.470876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.083 [2024-11-27 04:25:54.470910] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.083 [2024-11-27 04:25:54.470941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.083 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 [2024-11-27 04:25:54.533230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.084 BaseBdev1 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 [ 00:08:58.084 { 00:08:58.084 "name": "BaseBdev1", 00:08:58.084 "aliases": [ 00:08:58.084 "1b2fbcf4-de7c-46a9-b8fc-d64247c7e10a" 00:08:58.084 ], 00:08:58.084 "product_name": "Malloc disk", 00:08:58.084 "block_size": 512, 00:08:58.084 "num_blocks": 65536, 00:08:58.084 "uuid": "1b2fbcf4-de7c-46a9-b8fc-d64247c7e10a", 00:08:58.084 "assigned_rate_limits": { 00:08:58.084 "rw_ios_per_sec": 0, 00:08:58.084 "rw_mbytes_per_sec": 0, 00:08:58.084 "r_mbytes_per_sec": 0, 00:08:58.084 "w_mbytes_per_sec": 0 00:08:58.084 }, 00:08:58.084 "claimed": true, 00:08:58.084 "claim_type": "exclusive_write", 00:08:58.084 "zoned": false, 00:08:58.084 "supported_io_types": { 00:08:58.084 "read": true, 00:08:58.084 "write": true, 00:08:58.084 "unmap": true, 00:08:58.084 "flush": true, 00:08:58.084 "reset": true, 00:08:58.084 "nvme_admin": false, 00:08:58.084 "nvme_io": false, 00:08:58.084 "nvme_io_md": false, 00:08:58.084 "write_zeroes": true, 00:08:58.084 "zcopy": true, 00:08:58.084 "get_zone_info": false, 00:08:58.084 "zone_management": false, 00:08:58.084 "zone_append": false, 00:08:58.084 "compare": false, 00:08:58.084 "compare_and_write": false, 00:08:58.084 "abort": true, 00:08:58.084 "seek_hole": false, 00:08:58.084 "seek_data": false, 00:08:58.084 "copy": true, 00:08:58.084 "nvme_iov_md": 
false 00:08:58.084 }, 00:08:58.084 "memory_domains": [ 00:08:58.084 { 00:08:58.084 "dma_device_id": "system", 00:08:58.084 "dma_device_type": 1 00:08:58.084 }, 00:08:58.084 { 00:08:58.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.084 "dma_device_type": 2 00:08:58.084 } 00:08:58.084 ], 00:08:58.084 "driver_specific": {} 00:08:58.084 } 00:08:58.084 ] 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.084 04:25:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.084 "name": "Existed_Raid", 00:08:58.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.084 "strip_size_kb": 64, 00:08:58.084 "state": "configuring", 00:08:58.084 "raid_level": "raid0", 00:08:58.084 "superblock": false, 00:08:58.084 "num_base_bdevs": 2, 00:08:58.084 "num_base_bdevs_discovered": 1, 00:08:58.084 "num_base_bdevs_operational": 2, 00:08:58.084 "base_bdevs_list": [ 00:08:58.084 { 00:08:58.084 "name": "BaseBdev1", 00:08:58.084 "uuid": "1b2fbcf4-de7c-46a9-b8fc-d64247c7e10a", 00:08:58.084 "is_configured": true, 00:08:58.084 "data_offset": 0, 00:08:58.084 "data_size": 65536 00:08:58.084 }, 00:08:58.084 { 00:08:58.084 "name": "BaseBdev2", 00:08:58.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.084 "is_configured": false, 00:08:58.084 "data_offset": 0, 00:08:58.084 "data_size": 0 00:08:58.084 } 00:08:58.084 ] 00:08:58.084 }' 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.084 04:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.688 [2024-11-27 04:25:55.060454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.688 [2024-11-27 04:25:55.060649] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.688 [2024-11-27 04:25:55.072519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.688 [2024-11-27 04:25:55.075022] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.688 [2024-11-27 04:25:55.075099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.688 "name": "Existed_Raid", 00:08:58.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.688 "strip_size_kb": 64, 00:08:58.688 "state": "configuring", 00:08:58.688 "raid_level": "raid0", 00:08:58.688 "superblock": false, 00:08:58.688 "num_base_bdevs": 2, 00:08:58.688 "num_base_bdevs_discovered": 1, 00:08:58.688 "num_base_bdevs_operational": 2, 00:08:58.688 "base_bdevs_list": [ 00:08:58.688 { 00:08:58.688 "name": "BaseBdev1", 00:08:58.688 "uuid": "1b2fbcf4-de7c-46a9-b8fc-d64247c7e10a", 00:08:58.688 "is_configured": true, 00:08:58.688 "data_offset": 0, 00:08:58.688 "data_size": 65536 00:08:58.688 }, 00:08:58.688 { 00:08:58.688 "name": "BaseBdev2", 00:08:58.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.688 "is_configured": false, 00:08:58.688 "data_offset": 0, 00:08:58.688 "data_size": 0 
00:08:58.688 } 00:08:58.688 ] 00:08:58.688 }' 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.688 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.258 [2024-11-27 04:25:55.610611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.258 [2024-11-27 04:25:55.610823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.258 [2024-11-27 04:25:55.610862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:59.258 [2024-11-27 04:25:55.611291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:59.258 [2024-11-27 04:25:55.611603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.258 [2024-11-27 04:25:55.611657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:59.258 [2024-11-27 04:25:55.612059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.258 BaseBdev2 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.258 04:25:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.258 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.259 [ 00:08:59.259 { 00:08:59.259 "name": "BaseBdev2", 00:08:59.259 "aliases": [ 00:08:59.259 "05c6047d-db93-4ad7-8d5b-d0b58ff63a3a" 00:08:59.259 ], 00:08:59.259 "product_name": "Malloc disk", 00:08:59.259 "block_size": 512, 00:08:59.259 "num_blocks": 65536, 00:08:59.259 "uuid": "05c6047d-db93-4ad7-8d5b-d0b58ff63a3a", 00:08:59.259 "assigned_rate_limits": { 00:08:59.259 "rw_ios_per_sec": 0, 00:08:59.259 "rw_mbytes_per_sec": 0, 00:08:59.259 "r_mbytes_per_sec": 0, 00:08:59.259 "w_mbytes_per_sec": 0 00:08:59.259 }, 00:08:59.259 "claimed": true, 00:08:59.259 "claim_type": "exclusive_write", 00:08:59.259 "zoned": false, 00:08:59.259 "supported_io_types": { 00:08:59.259 "read": true, 00:08:59.259 "write": true, 00:08:59.259 "unmap": true, 00:08:59.259 "flush": true, 00:08:59.259 "reset": true, 00:08:59.259 "nvme_admin": false, 00:08:59.259 "nvme_io": false, 00:08:59.259 "nvme_io_md": 
false, 00:08:59.259 "write_zeroes": true, 00:08:59.259 "zcopy": true, 00:08:59.259 "get_zone_info": false, 00:08:59.259 "zone_management": false, 00:08:59.259 "zone_append": false, 00:08:59.259 "compare": false, 00:08:59.259 "compare_and_write": false, 00:08:59.259 "abort": true, 00:08:59.259 "seek_hole": false, 00:08:59.259 "seek_data": false, 00:08:59.259 "copy": true, 00:08:59.259 "nvme_iov_md": false 00:08:59.259 }, 00:08:59.259 "memory_domains": [ 00:08:59.259 { 00:08:59.259 "dma_device_id": "system", 00:08:59.259 "dma_device_type": 1 00:08:59.259 }, 00:08:59.259 { 00:08:59.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.259 "dma_device_type": 2 00:08:59.259 } 00:08:59.259 ], 00:08:59.259 "driver_specific": {} 00:08:59.259 } 00:08:59.259 ] 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.259 "name": "Existed_Raid", 00:08:59.259 "uuid": "40b8c90c-6f7f-4261-a1f1-a1aef7259ce3", 00:08:59.259 "strip_size_kb": 64, 00:08:59.259 "state": "online", 00:08:59.259 "raid_level": "raid0", 00:08:59.259 "superblock": false, 00:08:59.259 "num_base_bdevs": 2, 00:08:59.259 "num_base_bdevs_discovered": 2, 00:08:59.259 "num_base_bdevs_operational": 2, 00:08:59.259 "base_bdevs_list": [ 00:08:59.259 { 00:08:59.259 "name": "BaseBdev1", 00:08:59.259 "uuid": "1b2fbcf4-de7c-46a9-b8fc-d64247c7e10a", 00:08:59.259 "is_configured": true, 00:08:59.259 "data_offset": 0, 00:08:59.259 "data_size": 65536 00:08:59.259 }, 00:08:59.259 { 00:08:59.259 "name": "BaseBdev2", 00:08:59.259 "uuid": "05c6047d-db93-4ad7-8d5b-d0b58ff63a3a", 00:08:59.259 "is_configured": true, 00:08:59.259 "data_offset": 0, 00:08:59.259 "data_size": 65536 00:08:59.259 } 00:08:59.259 ] 00:08:59.259 }' 00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:59.259 04:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.519 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 [2024-11-27 04:25:56.090320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.778 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.778 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.778 "name": "Existed_Raid", 00:08:59.778 "aliases": [ 00:08:59.778 "40b8c90c-6f7f-4261-a1f1-a1aef7259ce3" 00:08:59.779 ], 00:08:59.779 "product_name": "Raid Volume", 00:08:59.779 "block_size": 512, 00:08:59.779 "num_blocks": 131072, 00:08:59.779 "uuid": "40b8c90c-6f7f-4261-a1f1-a1aef7259ce3", 00:08:59.779 "assigned_rate_limits": { 00:08:59.779 "rw_ios_per_sec": 0, 00:08:59.779 "rw_mbytes_per_sec": 0, 00:08:59.779 "r_mbytes_per_sec": 
0, 00:08:59.779 "w_mbytes_per_sec": 0 00:08:59.779 }, 00:08:59.779 "claimed": false, 00:08:59.779 "zoned": false, 00:08:59.779 "supported_io_types": { 00:08:59.779 "read": true, 00:08:59.779 "write": true, 00:08:59.779 "unmap": true, 00:08:59.779 "flush": true, 00:08:59.779 "reset": true, 00:08:59.779 "nvme_admin": false, 00:08:59.779 "nvme_io": false, 00:08:59.779 "nvme_io_md": false, 00:08:59.779 "write_zeroes": true, 00:08:59.779 "zcopy": false, 00:08:59.779 "get_zone_info": false, 00:08:59.779 "zone_management": false, 00:08:59.779 "zone_append": false, 00:08:59.779 "compare": false, 00:08:59.779 "compare_and_write": false, 00:08:59.779 "abort": false, 00:08:59.779 "seek_hole": false, 00:08:59.779 "seek_data": false, 00:08:59.779 "copy": false, 00:08:59.779 "nvme_iov_md": false 00:08:59.779 }, 00:08:59.779 "memory_domains": [ 00:08:59.779 { 00:08:59.779 "dma_device_id": "system", 00:08:59.779 "dma_device_type": 1 00:08:59.779 }, 00:08:59.779 { 00:08:59.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.779 "dma_device_type": 2 00:08:59.779 }, 00:08:59.779 { 00:08:59.779 "dma_device_id": "system", 00:08:59.779 "dma_device_type": 1 00:08:59.779 }, 00:08:59.779 { 00:08:59.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.779 "dma_device_type": 2 00:08:59.779 } 00:08:59.779 ], 00:08:59.779 "driver_specific": { 00:08:59.779 "raid": { 00:08:59.779 "uuid": "40b8c90c-6f7f-4261-a1f1-a1aef7259ce3", 00:08:59.779 "strip_size_kb": 64, 00:08:59.779 "state": "online", 00:08:59.779 "raid_level": "raid0", 00:08:59.779 "superblock": false, 00:08:59.779 "num_base_bdevs": 2, 00:08:59.779 "num_base_bdevs_discovered": 2, 00:08:59.779 "num_base_bdevs_operational": 2, 00:08:59.779 "base_bdevs_list": [ 00:08:59.779 { 00:08:59.779 "name": "BaseBdev1", 00:08:59.779 "uuid": "1b2fbcf4-de7c-46a9-b8fc-d64247c7e10a", 00:08:59.779 "is_configured": true, 00:08:59.779 "data_offset": 0, 00:08:59.779 "data_size": 65536 00:08:59.779 }, 00:08:59.779 { 00:08:59.779 "name": "BaseBdev2", 
00:08:59.779 "uuid": "05c6047d-db93-4ad7-8d5b-d0b58ff63a3a", 00:08:59.779 "is_configured": true, 00:08:59.779 "data_offset": 0, 00:08:59.779 "data_size": 65536 00:08:59.779 } 00:08:59.779 ] 00:08:59.779 } 00:08:59.779 } 00:08:59.779 }' 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:59.779 BaseBdev2' 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.779 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.779 [2024-11-27 04:25:56.313628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.779 [2024-11-27 04:25:56.313689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.779 [2024-11-27 04:25:56.313758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.038 "name": "Existed_Raid", 00:09:00.038 "uuid": "40b8c90c-6f7f-4261-a1f1-a1aef7259ce3", 00:09:00.038 "strip_size_kb": 64, 00:09:00.038 
"state": "offline", 00:09:00.038 "raid_level": "raid0", 00:09:00.038 "superblock": false, 00:09:00.038 "num_base_bdevs": 2, 00:09:00.038 "num_base_bdevs_discovered": 1, 00:09:00.038 "num_base_bdevs_operational": 1, 00:09:00.038 "base_bdevs_list": [ 00:09:00.038 { 00:09:00.038 "name": null, 00:09:00.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.038 "is_configured": false, 00:09:00.038 "data_offset": 0, 00:09:00.038 "data_size": 65536 00:09:00.038 }, 00:09:00.038 { 00:09:00.038 "name": "BaseBdev2", 00:09:00.038 "uuid": "05c6047d-db93-4ad7-8d5b-d0b58ff63a3a", 00:09:00.038 "is_configured": true, 00:09:00.038 "data_offset": 0, 00:09:00.038 "data_size": 65536 00:09:00.038 } 00:09:00.038 ] 00:09:00.038 }' 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.038 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.297 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:00.297 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.297 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.298 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.298 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.298 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.557 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.557 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.557 04:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.557 04:25:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:00.557 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.557 04:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.557 [2024-11-27 04:25:56.928260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.557 [2024-11-27 04:25:56.928336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:00.557 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.557 04:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.557 04:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.557 04:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.557 04:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:00.557 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.557 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60851 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60851 ']' 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60851 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60851 00:09:00.558 killing process with pid 60851 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60851' 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60851 00:09:00.558 [2024-11-27 04:25:57.118173] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.558 04:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60851 00:09:00.558 [2024-11-27 04:25:57.137330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:01.932 00:09:01.932 real 0m5.265s 00:09:01.932 user 0m7.533s 00:09:01.932 sys 0m0.837s 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.932 ************************************ 00:09:01.932 END TEST raid_state_function_test 00:09:01.932 ************************************ 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.932 04:25:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:01.932 04:25:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:01.932 04:25:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.932 04:25:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.932 ************************************ 00:09:01.932 START TEST raid_state_function_test_sb 00:09:01.932 ************************************ 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:01.932 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61104 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61104' 00:09:01.933 Process raid pid: 61104 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61104 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61104 ']' 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.933 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.933 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.933 [2024-11-27 04:25:58.507444] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:01.933 [2024-11-27 04:25:58.508121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.191 [2024-11-27 04:25:58.689071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.449 [2024-11-27 04:25:58.811012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.708 [2024-11-27 04:25:59.038991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.708 [2024-11-27 04:25:59.039037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.966 [2024-11-27 04:25:59.416310] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:02.966 [2024-11-27 04:25:59.416371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.966 [2024-11-27 04:25:59.416382] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.966 [2024-11-27 04:25:59.416392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.966 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.966 "name": "Existed_Raid", 00:09:02.966 "uuid": "bc5011c2-a363-42c4-8856-eb815b15837a", 00:09:02.966 "strip_size_kb": 64, 00:09:02.966 "state": "configuring", 00:09:02.966 "raid_level": "raid0", 00:09:02.966 "superblock": true, 00:09:02.966 "num_base_bdevs": 2, 00:09:02.966 "num_base_bdevs_discovered": 0, 00:09:02.967 "num_base_bdevs_operational": 2, 00:09:02.967 "base_bdevs_list": [ 00:09:02.967 { 00:09:02.967 "name": "BaseBdev1", 00:09:02.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.967 "is_configured": false, 00:09:02.967 "data_offset": 0, 00:09:02.967 "data_size": 0 00:09:02.967 }, 00:09:02.967 { 00:09:02.967 "name": "BaseBdev2", 00:09:02.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.967 "is_configured": false, 00:09:02.967 "data_offset": 0, 00:09:02.967 "data_size": 0 00:09:02.967 } 00:09:02.967 ] 00:09:02.967 }' 00:09:02.967 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.967 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.279 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.279 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.279 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.279 [2024-11-27 04:25:59.835536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:03.279 [2024-11-27 04:25:59.835651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:03.279 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.279 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:03.279 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.279 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.279 [2024-11-27 04:25:59.843520] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.279 [2024-11-27 04:25:59.843620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.279 [2024-11-27 04:25:59.843656] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.279 [2024-11-27 04:25:59.843692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.565 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.565 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.565 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.565 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.565 [2024-11-27 04:25:59.887628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.565 BaseBdev1 00:09:03.565 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.565 04:25:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:03.565 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:03.565 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.565 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.566 [ 00:09:03.566 { 00:09:03.566 "name": "BaseBdev1", 00:09:03.566 "aliases": [ 00:09:03.566 "5e01b040-ff8a-4c5d-b7b7-ab16777276d5" 00:09:03.566 ], 00:09:03.566 "product_name": "Malloc disk", 00:09:03.566 "block_size": 512, 00:09:03.566 "num_blocks": 65536, 00:09:03.566 "uuid": "5e01b040-ff8a-4c5d-b7b7-ab16777276d5", 00:09:03.566 "assigned_rate_limits": { 00:09:03.566 "rw_ios_per_sec": 0, 00:09:03.566 "rw_mbytes_per_sec": 0, 00:09:03.566 "r_mbytes_per_sec": 0, 00:09:03.566 "w_mbytes_per_sec": 0 00:09:03.566 }, 00:09:03.566 "claimed": true, 
00:09:03.566 "claim_type": "exclusive_write", 00:09:03.566 "zoned": false, 00:09:03.566 "supported_io_types": { 00:09:03.566 "read": true, 00:09:03.566 "write": true, 00:09:03.566 "unmap": true, 00:09:03.566 "flush": true, 00:09:03.566 "reset": true, 00:09:03.566 "nvme_admin": false, 00:09:03.566 "nvme_io": false, 00:09:03.566 "nvme_io_md": false, 00:09:03.566 "write_zeroes": true, 00:09:03.566 "zcopy": true, 00:09:03.566 "get_zone_info": false, 00:09:03.566 "zone_management": false, 00:09:03.566 "zone_append": false, 00:09:03.566 "compare": false, 00:09:03.566 "compare_and_write": false, 00:09:03.566 "abort": true, 00:09:03.566 "seek_hole": false, 00:09:03.566 "seek_data": false, 00:09:03.566 "copy": true, 00:09:03.566 "nvme_iov_md": false 00:09:03.566 }, 00:09:03.566 "memory_domains": [ 00:09:03.566 { 00:09:03.566 "dma_device_id": "system", 00:09:03.566 "dma_device_type": 1 00:09:03.566 }, 00:09:03.566 { 00:09:03.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.566 "dma_device_type": 2 00:09:03.566 } 00:09:03.566 ], 00:09:03.566 "driver_specific": {} 00:09:03.566 } 00:09:03.566 ] 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.566 04:25:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.566 "name": "Existed_Raid", 00:09:03.566 "uuid": "9b58c366-a955-4988-a730-776d819cf577", 00:09:03.566 "strip_size_kb": 64, 00:09:03.566 "state": "configuring", 00:09:03.566 "raid_level": "raid0", 00:09:03.566 "superblock": true, 00:09:03.566 "num_base_bdevs": 2, 00:09:03.566 "num_base_bdevs_discovered": 1, 00:09:03.566 "num_base_bdevs_operational": 2, 00:09:03.566 "base_bdevs_list": [ 00:09:03.566 { 00:09:03.566 "name": "BaseBdev1", 00:09:03.566 "uuid": "5e01b040-ff8a-4c5d-b7b7-ab16777276d5", 00:09:03.566 "is_configured": true, 00:09:03.566 "data_offset": 2048, 00:09:03.566 "data_size": 63488 00:09:03.566 }, 00:09:03.566 { 00:09:03.566 "name": "BaseBdev2", 00:09:03.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.566 
"is_configured": false, 00:09:03.566 "data_offset": 0, 00:09:03.566 "data_size": 0 00:09:03.566 } 00:09:03.566 ] 00:09:03.566 }' 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.566 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.824 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.824 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.824 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.824 [2024-11-27 04:26:00.406834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.824 [2024-11-27 04:26:00.406972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.083 [2024-11-27 04:26:00.418887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.083 [2024-11-27 04:26:00.420926] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.083 [2024-11-27 04:26:00.421029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.083 04:26:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.083 04:26:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.083 "name": "Existed_Raid", 00:09:04.083 "uuid": "1d541b54-0926-454e-839f-6f976a462104", 00:09:04.083 "strip_size_kb": 64, 00:09:04.083 "state": "configuring", 00:09:04.083 "raid_level": "raid0", 00:09:04.083 "superblock": true, 00:09:04.083 "num_base_bdevs": 2, 00:09:04.083 "num_base_bdevs_discovered": 1, 00:09:04.083 "num_base_bdevs_operational": 2, 00:09:04.083 "base_bdevs_list": [ 00:09:04.083 { 00:09:04.083 "name": "BaseBdev1", 00:09:04.083 "uuid": "5e01b040-ff8a-4c5d-b7b7-ab16777276d5", 00:09:04.083 "is_configured": true, 00:09:04.083 "data_offset": 2048, 00:09:04.083 "data_size": 63488 00:09:04.083 }, 00:09:04.083 { 00:09:04.083 "name": "BaseBdev2", 00:09:04.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.083 "is_configured": false, 00:09:04.083 "data_offset": 0, 00:09:04.083 "data_size": 0 00:09:04.083 } 00:09:04.083 ] 00:09:04.083 }' 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.083 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.343 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:04.343 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.343 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.602 [2024-11-27 04:26:00.954921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.602 [2024-11-27 04:26:00.955317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:04.602 [2024-11-27 04:26:00.955377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:04.602 [2024-11-27 04:26:00.955653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:09:04.602 BaseBdev2 00:09:04.602 [2024-11-27 04:26:00.955841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:04.602 [2024-11-27 04:26:00.955858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:04.602 [2024-11-27 04:26:00.956005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.602 04:26:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.602 [ 00:09:04.602 { 00:09:04.602 "name": "BaseBdev2", 00:09:04.602 "aliases": [ 00:09:04.602 "26b07d65-f528-45a0-b912-3924f8d001e7" 00:09:04.602 ], 00:09:04.602 "product_name": "Malloc disk", 00:09:04.602 "block_size": 512, 00:09:04.602 "num_blocks": 65536, 00:09:04.602 "uuid": "26b07d65-f528-45a0-b912-3924f8d001e7", 00:09:04.602 "assigned_rate_limits": { 00:09:04.602 "rw_ios_per_sec": 0, 00:09:04.602 "rw_mbytes_per_sec": 0, 00:09:04.602 "r_mbytes_per_sec": 0, 00:09:04.602 "w_mbytes_per_sec": 0 00:09:04.602 }, 00:09:04.602 "claimed": true, 00:09:04.602 "claim_type": "exclusive_write", 00:09:04.602 "zoned": false, 00:09:04.602 "supported_io_types": { 00:09:04.602 "read": true, 00:09:04.602 "write": true, 00:09:04.602 "unmap": true, 00:09:04.602 "flush": true, 00:09:04.602 "reset": true, 00:09:04.602 "nvme_admin": false, 00:09:04.602 "nvme_io": false, 00:09:04.602 "nvme_io_md": false, 00:09:04.602 "write_zeroes": true, 00:09:04.602 "zcopy": true, 00:09:04.602 "get_zone_info": false, 00:09:04.602 "zone_management": false, 00:09:04.602 "zone_append": false, 00:09:04.602 "compare": false, 00:09:04.602 "compare_and_write": false, 00:09:04.602 "abort": true, 00:09:04.602 "seek_hole": false, 00:09:04.602 "seek_data": false, 00:09:04.602 "copy": true, 00:09:04.602 "nvme_iov_md": false 00:09:04.602 }, 00:09:04.602 "memory_domains": [ 00:09:04.602 { 00:09:04.602 "dma_device_id": "system", 00:09:04.602 "dma_device_type": 1 00:09:04.602 }, 00:09:04.602 { 00:09:04.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.602 "dma_device_type": 2 00:09:04.602 } 00:09:04.602 ], 00:09:04.602 "driver_specific": {} 00:09:04.602 } 00:09:04.602 ] 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:04.602 04:26:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.602 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.602 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.602 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.602 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.602 04:26:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.602 "name": "Existed_Raid", 00:09:04.602 "uuid": "1d541b54-0926-454e-839f-6f976a462104", 00:09:04.602 "strip_size_kb": 64, 00:09:04.602 "state": "online", 00:09:04.602 "raid_level": "raid0", 00:09:04.602 "superblock": true, 00:09:04.602 "num_base_bdevs": 2, 00:09:04.603 "num_base_bdevs_discovered": 2, 00:09:04.603 "num_base_bdevs_operational": 2, 00:09:04.603 "base_bdevs_list": [ 00:09:04.603 { 00:09:04.603 "name": "BaseBdev1", 00:09:04.603 "uuid": "5e01b040-ff8a-4c5d-b7b7-ab16777276d5", 00:09:04.603 "is_configured": true, 00:09:04.603 "data_offset": 2048, 00:09:04.603 "data_size": 63488 00:09:04.603 }, 00:09:04.603 { 00:09:04.603 "name": "BaseBdev2", 00:09:04.603 "uuid": "26b07d65-f528-45a0-b912-3924f8d001e7", 00:09:04.603 "is_configured": true, 00:09:04.603 "data_offset": 2048, 00:09:04.603 "data_size": 63488 00:09:04.603 } 00:09:04.603 ] 00:09:04.603 }' 00:09:04.603 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.603 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.860 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.860 [2024-11-27 04:26:01.442455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.118 "name": "Existed_Raid", 00:09:05.118 "aliases": [ 00:09:05.118 "1d541b54-0926-454e-839f-6f976a462104" 00:09:05.118 ], 00:09:05.118 "product_name": "Raid Volume", 00:09:05.118 "block_size": 512, 00:09:05.118 "num_blocks": 126976, 00:09:05.118 "uuid": "1d541b54-0926-454e-839f-6f976a462104", 00:09:05.118 "assigned_rate_limits": { 00:09:05.118 "rw_ios_per_sec": 0, 00:09:05.118 "rw_mbytes_per_sec": 0, 00:09:05.118 "r_mbytes_per_sec": 0, 00:09:05.118 "w_mbytes_per_sec": 0 00:09:05.118 }, 00:09:05.118 "claimed": false, 00:09:05.118 "zoned": false, 00:09:05.118 "supported_io_types": { 00:09:05.118 "read": true, 00:09:05.118 "write": true, 00:09:05.118 "unmap": true, 00:09:05.118 "flush": true, 00:09:05.118 "reset": true, 00:09:05.118 "nvme_admin": false, 00:09:05.118 "nvme_io": false, 00:09:05.118 "nvme_io_md": false, 00:09:05.118 "write_zeroes": true, 00:09:05.118 "zcopy": false, 00:09:05.118 "get_zone_info": false, 00:09:05.118 "zone_management": false, 00:09:05.118 "zone_append": false, 00:09:05.118 "compare": false, 00:09:05.118 "compare_and_write": false, 00:09:05.118 "abort": false, 00:09:05.118 "seek_hole": false, 00:09:05.118 "seek_data": false, 00:09:05.118 "copy": false, 00:09:05.118 "nvme_iov_md": false 00:09:05.118 }, 00:09:05.118 "memory_domains": [ 00:09:05.118 { 00:09:05.118 
"dma_device_id": "system", 00:09:05.118 "dma_device_type": 1 00:09:05.118 }, 00:09:05.118 { 00:09:05.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.118 "dma_device_type": 2 00:09:05.118 }, 00:09:05.118 { 00:09:05.118 "dma_device_id": "system", 00:09:05.118 "dma_device_type": 1 00:09:05.118 }, 00:09:05.118 { 00:09:05.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.118 "dma_device_type": 2 00:09:05.118 } 00:09:05.118 ], 00:09:05.118 "driver_specific": { 00:09:05.118 "raid": { 00:09:05.118 "uuid": "1d541b54-0926-454e-839f-6f976a462104", 00:09:05.118 "strip_size_kb": 64, 00:09:05.118 "state": "online", 00:09:05.118 "raid_level": "raid0", 00:09:05.118 "superblock": true, 00:09:05.118 "num_base_bdevs": 2, 00:09:05.118 "num_base_bdevs_discovered": 2, 00:09:05.118 "num_base_bdevs_operational": 2, 00:09:05.118 "base_bdevs_list": [ 00:09:05.118 { 00:09:05.118 "name": "BaseBdev1", 00:09:05.118 "uuid": "5e01b040-ff8a-4c5d-b7b7-ab16777276d5", 00:09:05.118 "is_configured": true, 00:09:05.118 "data_offset": 2048, 00:09:05.118 "data_size": 63488 00:09:05.118 }, 00:09:05.118 { 00:09:05.118 "name": "BaseBdev2", 00:09:05.118 "uuid": "26b07d65-f528-45a0-b912-3924f8d001e7", 00:09:05.118 "is_configured": true, 00:09:05.118 "data_offset": 2048, 00:09:05.118 "data_size": 63488 00:09:05.118 } 00:09:05.118 ] 00:09:05.118 } 00:09:05.118 } 00:09:05.118 }' 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:05.118 BaseBdev2' 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.118 04:26:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.118 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.119 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.119 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.119 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.119 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.119 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.119 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:05.119 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.119 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.119 [2024-11-27 04:26:01.689781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.119 [2024-11-27 04:26:01.689888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.119 [2024-11-27 04:26:01.689949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.377 "name": "Existed_Raid", 00:09:05.377 "uuid": "1d541b54-0926-454e-839f-6f976a462104", 00:09:05.377 "strip_size_kb": 64, 00:09:05.377 "state": "offline", 00:09:05.377 "raid_level": "raid0", 00:09:05.377 "superblock": true, 00:09:05.377 "num_base_bdevs": 2, 00:09:05.377 "num_base_bdevs_discovered": 1, 00:09:05.377 "num_base_bdevs_operational": 1, 00:09:05.377 "base_bdevs_list": [ 00:09:05.377 { 00:09:05.377 "name": null, 00:09:05.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.377 "is_configured": false, 00:09:05.377 "data_offset": 0, 00:09:05.377 "data_size": 63488 00:09:05.377 }, 00:09:05.377 { 00:09:05.377 "name": "BaseBdev2", 00:09:05.377 "uuid": "26b07d65-f528-45a0-b912-3924f8d001e7", 00:09:05.377 "is_configured": true, 00:09:05.377 "data_offset": 2048, 00:09:05.377 "data_size": 63488 00:09:05.377 } 00:09:05.377 ] 
00:09:05.377 }' 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.377 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.635 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:05.635 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:05.635 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.635 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.635 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.635 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:05.635 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.894 [2024-11-27 04:26:02.257769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:05.894 [2024-11-27 04:26:02.257834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.894 04:26:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61104 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61104 ']' 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61104 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61104 00:09:05.894 killing process with pid 61104 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61104' 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61104 00:09:05.894 [2024-11-27 04:26:02.471899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.894 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61104 00:09:06.152 [2024-11-27 04:26:02.491921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.532 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.532 00:09:07.532 real 0m5.480s 00:09:07.532 user 0m7.853s 00:09:07.532 sys 0m0.822s 00:09:07.532 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.532 ************************************ 00:09:07.532 END TEST raid_state_function_test_sb 00:09:07.532 ************************************ 00:09:07.532 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.532 04:26:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:07.532 04:26:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:07.532 04:26:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.532 04:26:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.532 ************************************ 00:09:07.532 START TEST raid_superblock_test 00:09:07.532 ************************************ 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61361 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61361 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61361 ']' 00:09:07.532 
04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.532 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.532 [2024-11-27 04:26:04.048610] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:07.532 [2024-11-27 04:26:04.048921] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61361 ] 00:09:07.791 [2024-11-27 04:26:04.235423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.791 [2024-11-27 04:26:04.358188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.050 [2024-11-27 04:26:04.577297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.050 [2024-11-27 04:26:04.577456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.619 malloc1 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.619 [2024-11-27 04:26:04.975276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:08.619 [2024-11-27 04:26:04.975353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.619 [2024-11-27 04:26:04.975382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:08.619 [2024-11-27 04:26:04.975394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:08.619 [2024-11-27 04:26:04.977879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.619 [2024-11-27 04:26:04.977921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:08.619 pt1 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:08.619 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:08.620 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:08.620 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.620 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.620 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.620 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:08.620 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.620 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.620 malloc2 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.620 [2024-11-27 04:26:05.033741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:08.620 [2024-11-27 04:26:05.033895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.620 [2024-11-27 04:26:05.033933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:08.620 [2024-11-27 04:26:05.033944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.620 [2024-11-27 04:26:05.036572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.620 [2024-11-27 04:26:05.036620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:08.620 pt2 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.620 [2024-11-27 04:26:05.045786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:08.620 [2024-11-27 04:26:05.047758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.620 [2024-11-27 04:26:05.048017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:08.620 [2024-11-27 04:26:05.048037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:09:08.620 [2024-11-27 04:26:05.048359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:08.620 [2024-11-27 04:26:05.048526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:08.620 [2024-11-27 04:26:05.048539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:08.620 [2024-11-27 04:26:05.048721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.620 04:26:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.620 "name": "raid_bdev1", 00:09:08.620 "uuid": "1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee", 00:09:08.620 "strip_size_kb": 64, 00:09:08.620 "state": "online", 00:09:08.620 "raid_level": "raid0", 00:09:08.620 "superblock": true, 00:09:08.620 "num_base_bdevs": 2, 00:09:08.620 "num_base_bdevs_discovered": 2, 00:09:08.620 "num_base_bdevs_operational": 2, 00:09:08.620 "base_bdevs_list": [ 00:09:08.620 { 00:09:08.620 "name": "pt1", 00:09:08.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.620 "is_configured": true, 00:09:08.620 "data_offset": 2048, 00:09:08.620 "data_size": 63488 00:09:08.620 }, 00:09:08.620 { 00:09:08.620 "name": "pt2", 00:09:08.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.620 "is_configured": true, 00:09:08.620 "data_offset": 2048, 00:09:08.620 "data_size": 63488 00:09:08.620 } 00:09:08.620 ] 00:09:08.620 }' 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.620 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.189 
04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 [2024-11-27 04:26:05.509390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.189 "name": "raid_bdev1", 00:09:09.189 "aliases": [ 00:09:09.189 "1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee" 00:09:09.189 ], 00:09:09.189 "product_name": "Raid Volume", 00:09:09.189 "block_size": 512, 00:09:09.189 "num_blocks": 126976, 00:09:09.189 "uuid": "1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee", 00:09:09.189 "assigned_rate_limits": { 00:09:09.189 "rw_ios_per_sec": 0, 00:09:09.189 "rw_mbytes_per_sec": 0, 00:09:09.189 "r_mbytes_per_sec": 0, 00:09:09.189 "w_mbytes_per_sec": 0 00:09:09.189 }, 00:09:09.189 "claimed": false, 00:09:09.189 "zoned": false, 00:09:09.189 "supported_io_types": { 00:09:09.189 "read": true, 00:09:09.189 "write": true, 00:09:09.189 "unmap": true, 00:09:09.189 "flush": true, 00:09:09.189 "reset": true, 00:09:09.189 "nvme_admin": false, 00:09:09.189 "nvme_io": false, 00:09:09.189 "nvme_io_md": false, 00:09:09.189 "write_zeroes": true, 00:09:09.189 "zcopy": false, 00:09:09.189 "get_zone_info": false, 00:09:09.189 "zone_management": false, 00:09:09.189 "zone_append": false, 00:09:09.189 "compare": false, 00:09:09.189 "compare_and_write": false, 00:09:09.189 "abort": false, 00:09:09.189 "seek_hole": false, 00:09:09.189 
"seek_data": false, 00:09:09.189 "copy": false, 00:09:09.189 "nvme_iov_md": false 00:09:09.189 }, 00:09:09.189 "memory_domains": [ 00:09:09.189 { 00:09:09.189 "dma_device_id": "system", 00:09:09.189 "dma_device_type": 1 00:09:09.189 }, 00:09:09.189 { 00:09:09.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.189 "dma_device_type": 2 00:09:09.189 }, 00:09:09.189 { 00:09:09.189 "dma_device_id": "system", 00:09:09.189 "dma_device_type": 1 00:09:09.189 }, 00:09:09.189 { 00:09:09.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.189 "dma_device_type": 2 00:09:09.189 } 00:09:09.189 ], 00:09:09.189 "driver_specific": { 00:09:09.189 "raid": { 00:09:09.189 "uuid": "1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee", 00:09:09.189 "strip_size_kb": 64, 00:09:09.189 "state": "online", 00:09:09.189 "raid_level": "raid0", 00:09:09.189 "superblock": true, 00:09:09.189 "num_base_bdevs": 2, 00:09:09.189 "num_base_bdevs_discovered": 2, 00:09:09.189 "num_base_bdevs_operational": 2, 00:09:09.189 "base_bdevs_list": [ 00:09:09.189 { 00:09:09.189 "name": "pt1", 00:09:09.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.189 "is_configured": true, 00:09:09.189 "data_offset": 2048, 00:09:09.189 "data_size": 63488 00:09:09.189 }, 00:09:09.189 { 00:09:09.189 "name": "pt2", 00:09:09.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.189 "is_configured": true, 00:09:09.189 "data_offset": 2048, 00:09:09.189 "data_size": 63488 00:09:09.189 } 00:09:09.189 ] 00:09:09.189 } 00:09:09.189 } 00:09:09.189 }' 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:09.189 pt2' 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.189 04:26:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:09.189 [2024-11-27 04:26:05.736991] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.189 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee ']' 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.449 [2024-11-27 04:26:05.784567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.449 [2024-11-27 04:26:05.784661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.449 [2024-11-27 04:26:05.784800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.449 [2024-11-27 04:26:05.784857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.449 [2024-11-27 04:26:05.784871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:09.449 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.450 [2024-11-27 04:26:05.924392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:09.450 [2024-11-27 04:26:05.926538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:09.450 [2024-11-27 04:26:05.926616] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:09.450 [2024-11-27 04:26:05.926678] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:09.450 [2024-11-27 04:26:05.926695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.450 [2024-11-27 04:26:05.926710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:09.450 request: 00:09:09.450 { 00:09:09.450 "name": "raid_bdev1", 00:09:09.450 "raid_level": "raid0", 00:09:09.450 "base_bdevs": [ 00:09:09.450 "malloc1", 00:09:09.450 "malloc2" 00:09:09.450 ], 00:09:09.450 "strip_size_kb": 64, 00:09:09.450 "superblock": false, 00:09:09.450 "method": "bdev_raid_create", 00:09:09.450 "req_id": 1 00:09:09.450 } 00:09:09.450 Got JSON-RPC error response 00:09:09.450 response: 00:09:09.450 { 00:09:09.450 "code": -17, 00:09:09.450 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:09.450 } 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.450 
04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.450 [2024-11-27 04:26:05.984278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:09.450 [2024-11-27 04:26:05.984445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.450 [2024-11-27 04:26:05.984488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:09.450 [2024-11-27 04:26:05.984533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.450 [2024-11-27 04:26:05.987141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.450 [2024-11-27 04:26:05.987243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:09.450 [2024-11-27 04:26:05.987393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:09.450 [2024-11-27 04:26:05.987510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:09.450 pt1 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.450 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.450 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.709 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.709 "name": "raid_bdev1", 00:09:09.709 "uuid": "1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee", 00:09:09.709 "strip_size_kb": 64, 00:09:09.709 "state": "configuring", 00:09:09.709 "raid_level": "raid0", 00:09:09.709 "superblock": true, 00:09:09.709 "num_base_bdevs": 2, 00:09:09.709 "num_base_bdevs_discovered": 1, 00:09:09.709 "num_base_bdevs_operational": 2, 00:09:09.709 "base_bdevs_list": [ 00:09:09.709 { 00:09:09.709 "name": "pt1", 00:09:09.709 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:09.709 "is_configured": true, 00:09:09.709 "data_offset": 2048, 00:09:09.709 "data_size": 63488 00:09:09.710 }, 00:09:09.710 { 00:09:09.710 "name": null, 00:09:09.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.710 "is_configured": false, 00:09:09.710 "data_offset": 2048, 00:09:09.710 "data_size": 63488 00:09:09.710 } 00:09:09.710 ] 00:09:09.710 }' 00:09:09.710 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.710 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.969 [2024-11-27 04:26:06.451502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:09.969 [2024-11-27 04:26:06.451590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.969 [2024-11-27 04:26:06.451615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:09.969 [2024-11-27 04:26:06.451627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.969 [2024-11-27 04:26:06.452129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.969 [2024-11-27 04:26:06.452190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:09:09.969 [2024-11-27 04:26:06.452289] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:09.969 [2024-11-27 04:26:06.452318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:09.969 [2024-11-27 04:26:06.452449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.969 [2024-11-27 04:26:06.452468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:09.969 [2024-11-27 04:26:06.452734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:09.969 [2024-11-27 04:26:06.452881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.969 [2024-11-27 04:26:06.452897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:09.969 [2024-11-27 04:26:06.453047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.969 pt2 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.969 "name": "raid_bdev1", 00:09:09.969 "uuid": "1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee", 00:09:09.969 "strip_size_kb": 64, 00:09:09.969 "state": "online", 00:09:09.969 "raid_level": "raid0", 00:09:09.969 "superblock": true, 00:09:09.969 "num_base_bdevs": 2, 00:09:09.969 "num_base_bdevs_discovered": 2, 00:09:09.969 "num_base_bdevs_operational": 2, 00:09:09.969 "base_bdevs_list": [ 00:09:09.969 { 00:09:09.969 "name": "pt1", 00:09:09.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.969 "is_configured": true, 00:09:09.969 "data_offset": 2048, 00:09:09.969 "data_size": 63488 00:09:09.969 }, 00:09:09.969 { 00:09:09.969 "name": "pt2", 00:09:09.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.969 "is_configured": true, 00:09:09.969 "data_offset": 2048, 00:09:09.969 "data_size": 63488 00:09:09.969 } 00:09:09.969 ] 00:09:09.969 }' 00:09:09.969 04:26:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.969 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.538 [2024-11-27 04:26:06.918989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.538 "name": "raid_bdev1", 00:09:10.538 "aliases": [ 00:09:10.538 "1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee" 00:09:10.538 ], 00:09:10.538 "product_name": "Raid Volume", 00:09:10.538 "block_size": 512, 00:09:10.538 "num_blocks": 126976, 00:09:10.538 "uuid": "1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee", 00:09:10.538 "assigned_rate_limits": { 00:09:10.538 "rw_ios_per_sec": 0, 00:09:10.538 "rw_mbytes_per_sec": 0, 00:09:10.538 
"r_mbytes_per_sec": 0, 00:09:10.538 "w_mbytes_per_sec": 0 00:09:10.538 }, 00:09:10.538 "claimed": false, 00:09:10.538 "zoned": false, 00:09:10.538 "supported_io_types": { 00:09:10.538 "read": true, 00:09:10.538 "write": true, 00:09:10.538 "unmap": true, 00:09:10.538 "flush": true, 00:09:10.538 "reset": true, 00:09:10.538 "nvme_admin": false, 00:09:10.538 "nvme_io": false, 00:09:10.538 "nvme_io_md": false, 00:09:10.538 "write_zeroes": true, 00:09:10.538 "zcopy": false, 00:09:10.538 "get_zone_info": false, 00:09:10.538 "zone_management": false, 00:09:10.538 "zone_append": false, 00:09:10.538 "compare": false, 00:09:10.538 "compare_and_write": false, 00:09:10.538 "abort": false, 00:09:10.538 "seek_hole": false, 00:09:10.538 "seek_data": false, 00:09:10.538 "copy": false, 00:09:10.538 "nvme_iov_md": false 00:09:10.538 }, 00:09:10.538 "memory_domains": [ 00:09:10.538 { 00:09:10.538 "dma_device_id": "system", 00:09:10.538 "dma_device_type": 1 00:09:10.538 }, 00:09:10.538 { 00:09:10.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.538 "dma_device_type": 2 00:09:10.538 }, 00:09:10.538 { 00:09:10.538 "dma_device_id": "system", 00:09:10.538 "dma_device_type": 1 00:09:10.538 }, 00:09:10.538 { 00:09:10.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.538 "dma_device_type": 2 00:09:10.538 } 00:09:10.538 ], 00:09:10.538 "driver_specific": { 00:09:10.538 "raid": { 00:09:10.538 "uuid": "1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee", 00:09:10.538 "strip_size_kb": 64, 00:09:10.538 "state": "online", 00:09:10.538 "raid_level": "raid0", 00:09:10.538 "superblock": true, 00:09:10.538 "num_base_bdevs": 2, 00:09:10.538 "num_base_bdevs_discovered": 2, 00:09:10.538 "num_base_bdevs_operational": 2, 00:09:10.538 "base_bdevs_list": [ 00:09:10.538 { 00:09:10.538 "name": "pt1", 00:09:10.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.538 "is_configured": true, 00:09:10.538 "data_offset": 2048, 00:09:10.538 "data_size": 63488 00:09:10.538 }, 00:09:10.538 { 00:09:10.538 "name": 
"pt2", 00:09:10.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.538 "is_configured": true, 00:09:10.538 "data_offset": 2048, 00:09:10.538 "data_size": 63488 00:09:10.538 } 00:09:10.538 ] 00:09:10.538 } 00:09:10.538 } 00:09:10.538 }' 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:10.538 pt2' 00:09:10.538 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.538 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.539 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:10.539 04:26:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.539 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.539 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.539 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.797 [2024-11-27 04:26:07.138645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee '!=' 1656b4a6-3fe0-4d6a-9b02-9c7c75aef9ee ']' 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61361 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61361 ']' 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61361 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61361 00:09:10.797 killing process with pid 61361 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61361' 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61361 00:09:10.797 [2024-11-27 04:26:07.204523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.797 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61361 00:09:10.797 [2024-11-27 04:26:07.204628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.797 [2024-11-27 04:26:07.204681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.797 [2024-11-27 04:26:07.204693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:11.056 [2024-11-27 04:26:07.449654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.436 ************************************ 00:09:12.436 END TEST raid_superblock_test 00:09:12.436 ************************************ 00:09:12.436 04:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:12.436 00:09:12.436 real 0m4.724s 00:09:12.436 user 0m6.613s 00:09:12.436 sys 0m0.765s 00:09:12.436 04:26:08 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.436 04:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.436 04:26:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:12.436 04:26:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:12.436 04:26:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.436 04:26:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.436 ************************************ 00:09:12.436 START TEST raid_read_error_test 00:09:12.436 ************************************ 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.r60GuKKgVU 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61573 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61573 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61573 ']' 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:12.436 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.436 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:09:12.437 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.437 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.437 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.437 [2024-11-27 04:26:08.837848] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:12.437 [2024-11-27 04:26:08.838070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61573 ] 00:09:12.437 [2024-11-27 04:26:09.017728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.697 [2024-11-27 04:26:09.134834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.957 [2024-11-27 04:26:09.340997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.957 [2024-11-27 04:26:09.341150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.215 BaseBdev1_malloc 
00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.215 true 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.215 [2024-11-27 04:26:09.778896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:13.215 [2024-11-27 04:26:09.779045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.215 [2024-11-27 04:26:09.779079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:13.215 [2024-11-27 04:26:09.779103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.215 [2024-11-27 04:26:09.781669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.215 [2024-11-27 04:26:09.781719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:13.215 BaseBdev1 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.215 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.475 BaseBdev2_malloc 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.475 true 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.475 [2024-11-27 04:26:09.846269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:13.475 [2024-11-27 04:26:09.846400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.475 [2024-11-27 04:26:09.846443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:13.475 [2024-11-27 04:26:09.846456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.475 [2024-11-27 04:26:09.849116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.475 [2024-11-27 04:26:09.849168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:13.475 BaseBdev2 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.475 [2024-11-27 04:26:09.858325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.475 [2024-11-27 04:26:09.860438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.475 [2024-11-27 04:26:09.860812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:13.475 [2024-11-27 04:26:09.860843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:13.475 [2024-11-27 04:26:09.861208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:13.475 [2024-11-27 04:26:09.861418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:13.475 [2024-11-27 04:26:09.861435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:13.475 [2024-11-27 04:26:09.861629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.475 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.475 "name": "raid_bdev1", 00:09:13.475 "uuid": "d3bc6a47-4722-45c3-8403-44e7e91a3c77", 00:09:13.475 "strip_size_kb": 64, 00:09:13.475 "state": "online", 00:09:13.475 "raid_level": "raid0", 00:09:13.475 "superblock": true, 00:09:13.475 "num_base_bdevs": 2, 00:09:13.475 "num_base_bdevs_discovered": 2, 00:09:13.475 "num_base_bdevs_operational": 2, 00:09:13.475 "base_bdevs_list": [ 00:09:13.475 { 00:09:13.475 "name": "BaseBdev1", 00:09:13.475 "uuid": "06bbd644-4f8e-5d99-9db3-b04543e0e7c0", 00:09:13.475 "is_configured": true, 00:09:13.475 "data_offset": 2048, 00:09:13.475 "data_size": 63488 00:09:13.475 }, 00:09:13.475 { 00:09:13.475 "name": "BaseBdev2", 00:09:13.475 "uuid": 
"7b815cfb-3d47-5f77-b16e-bf964a7cd03b", 00:09:13.475 "is_configured": true, 00:09:13.475 "data_offset": 2048, 00:09:13.475 "data_size": 63488 00:09:13.475 } 00:09:13.475 ] 00:09:13.475 }' 00:09:13.476 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.476 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.734 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:13.734 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:13.992 [2024-11-27 04:26:10.390880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.929 "name": "raid_bdev1", 00:09:14.929 "uuid": "d3bc6a47-4722-45c3-8403-44e7e91a3c77", 00:09:14.929 "strip_size_kb": 64, 00:09:14.929 "state": "online", 00:09:14.929 "raid_level": "raid0", 00:09:14.929 "superblock": true, 00:09:14.929 "num_base_bdevs": 2, 00:09:14.929 "num_base_bdevs_discovered": 2, 00:09:14.929 "num_base_bdevs_operational": 2, 00:09:14.929 "base_bdevs_list": [ 00:09:14.929 { 00:09:14.929 "name": "BaseBdev1", 00:09:14.929 "uuid": "06bbd644-4f8e-5d99-9db3-b04543e0e7c0", 00:09:14.929 "is_configured": true, 00:09:14.929 "data_offset": 2048, 00:09:14.929 "data_size": 63488 00:09:14.929 }, 00:09:14.929 { 00:09:14.929 "name": "BaseBdev2", 00:09:14.929 "uuid": 
"7b815cfb-3d47-5f77-b16e-bf964a7cd03b", 00:09:14.929 "is_configured": true, 00:09:14.929 "data_offset": 2048, 00:09:14.929 "data_size": 63488 00:09:14.929 } 00:09:14.929 ] 00:09:14.929 }' 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.929 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.497 [2024-11-27 04:26:11.784216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.497 [2024-11-27 04:26:11.784263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.497 [2024-11-27 04:26:11.787459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.497 [2024-11-27 04:26:11.787515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.497 [2024-11-27 04:26:11.787550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.497 [2024-11-27 04:26:11.787564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:15.497 { 00:09:15.497 "results": [ 00:09:15.497 { 00:09:15.497 "job": "raid_bdev1", 00:09:15.497 "core_mask": "0x1", 00:09:15.497 "workload": "randrw", 00:09:15.497 "percentage": 50, 00:09:15.497 "status": "finished", 00:09:15.497 "queue_depth": 1, 00:09:15.497 "io_size": 131072, 00:09:15.497 "runtime": 1.393962, 00:09:15.497 "iops": 13455.890476210972, 00:09:15.497 "mibps": 1681.9863095263715, 00:09:15.497 "io_failed": 1, 00:09:15.497 "io_timeout": 0, 00:09:15.497 "avg_latency_us": 
102.79064955575285, 00:09:15.497 "min_latency_us": 26.717903930131005, 00:09:15.497 "max_latency_us": 1874.5013100436681 00:09:15.497 } 00:09:15.497 ], 00:09:15.497 "core_count": 1 00:09:15.497 } 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61573 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61573 ']' 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61573 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61573 00:09:15.497 killing process with pid 61573 00:09:15.498 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.498 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.498 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61573' 00:09:15.498 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61573 00:09:15.498 [2024-11-27 04:26:11.834651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.498 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61573 00:09:15.498 [2024-11-27 04:26:11.986405] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.875 04:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.r60GuKKgVU 00:09:16.875 04:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:16.875 
04:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:16.875 04:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:16.875 04:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:16.875 04:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.875 04:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.875 ************************************ 00:09:16.875 END TEST raid_read_error_test 00:09:16.875 ************************************ 00:09:16.875 04:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:16.875 00:09:16.875 real 0m4.527s 00:09:16.875 user 0m5.424s 00:09:16.875 sys 0m0.552s 00:09:16.875 04:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.875 04:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.875 04:26:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:16.875 04:26:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:16.875 04:26:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.875 04:26:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.875 ************************************ 00:09:16.875 START TEST raid_write_error_test 00:09:16.875 ************************************ 00:09:16.875 04:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:16.876 04:26:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mIjO3VlH03 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61713 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61713 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61713 ']' 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.876 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:16.876 [2024-11-27 04:26:13.431534] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:16.876 [2024-11-27 04:26:13.431757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61713 ] 00:09:17.135 [2024-11-27 04:26:13.610595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.393 [2024-11-27 04:26:13.744027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.687 [2024-11-27 04:26:13.991324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.687 [2024-11-27 04:26:13.991400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.985 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.986 BaseBdev1_malloc 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.986 true 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.986 [2024-11-27 04:26:14.382742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:17.986 [2024-11-27 04:26:14.382817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.986 [2024-11-27 04:26:14.382844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:17.986 [2024-11-27 04:26:14.382856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.986 [2024-11-27 04:26:14.385469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.986 [2024-11-27 04:26:14.385591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.986 BaseBdev1 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.986 BaseBdev2_malloc 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.986 04:26:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.986 true 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.986 [2024-11-27 04:26:14.456839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.986 [2024-11-27 04:26:14.456921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.986 [2024-11-27 04:26:14.456945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:17.986 [2024-11-27 04:26:14.456960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.986 [2024-11-27 04:26:14.459737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.986 [2024-11-27 04:26:14.459863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.986 BaseBdev2 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.986 [2024-11-27 04:26:14.468916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:17.986 [2024-11-27 04:26:14.471272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.986 [2024-11-27 04:26:14.471540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.986 [2024-11-27 04:26:14.471563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:17.986 [2024-11-27 04:26:14.471927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:17.986 [2024-11-27 04:26:14.472204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:17.986 [2024-11-27 04:26:14.472227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:17.986 [2024-11-27 04:26:14.472477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.986 "name": "raid_bdev1", 00:09:17.986 "uuid": "15798c4d-1873-4d6b-889c-929184d2e886", 00:09:17.986 "strip_size_kb": 64, 00:09:17.986 "state": "online", 00:09:17.986 "raid_level": "raid0", 00:09:17.986 "superblock": true, 00:09:17.986 "num_base_bdevs": 2, 00:09:17.986 "num_base_bdevs_discovered": 2, 00:09:17.986 "num_base_bdevs_operational": 2, 00:09:17.986 "base_bdevs_list": [ 00:09:17.986 { 00:09:17.986 "name": "BaseBdev1", 00:09:17.986 "uuid": "ae321ea1-ae77-51cb-8eeb-dfec8d5312b1", 00:09:17.986 "is_configured": true, 00:09:17.986 "data_offset": 2048, 00:09:17.986 "data_size": 63488 00:09:17.986 }, 00:09:17.986 { 00:09:17.986 "name": "BaseBdev2", 00:09:17.986 "uuid": "898f26c9-4bbd-5fc0-b2c4-5c3b5984b2eb", 00:09:17.986 "is_configured": true, 00:09:17.986 "data_offset": 2048, 00:09:17.986 "data_size": 63488 00:09:17.986 } 00:09:17.986 ] 00:09:17.986 }' 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.986 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.554 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:18.554 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:18.554 [2024-11-27 04:26:15.041387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.489 04:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.489 04:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.489 "name": "raid_bdev1", 00:09:19.489 "uuid": "15798c4d-1873-4d6b-889c-929184d2e886", 00:09:19.489 "strip_size_kb": 64, 00:09:19.489 "state": "online", 00:09:19.489 "raid_level": "raid0", 00:09:19.489 "superblock": true, 00:09:19.489 "num_base_bdevs": 2, 00:09:19.489 "num_base_bdevs_discovered": 2, 00:09:19.489 "num_base_bdevs_operational": 2, 00:09:19.489 "base_bdevs_list": [ 00:09:19.489 { 00:09:19.489 "name": "BaseBdev1", 00:09:19.489 "uuid": "ae321ea1-ae77-51cb-8eeb-dfec8d5312b1", 00:09:19.489 "is_configured": true, 00:09:19.489 "data_offset": 2048, 00:09:19.489 "data_size": 63488 00:09:19.490 }, 00:09:19.490 { 00:09:19.490 "name": "BaseBdev2", 00:09:19.490 "uuid": "898f26c9-4bbd-5fc0-b2c4-5c3b5984b2eb", 00:09:19.490 "is_configured": true, 00:09:19.490 "data_offset": 2048, 00:09:19.490 "data_size": 63488 00:09:19.490 } 00:09:19.490 ] 00:09:19.490 }' 00:09:19.490 04:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.490 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.056 04:26:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.056 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.056 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.056 [2024-11-27 04:26:16.425518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.056 [2024-11-27 04:26:16.425558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.056 [2024-11-27 04:26:16.428328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.056 [2024-11-27 04:26:16.428456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.056 [2024-11-27 04:26:16.428499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.056 [2024-11-27 04:26:16.428513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:20.056 { 00:09:20.056 "results": [ 00:09:20.056 { 00:09:20.056 "job": "raid_bdev1", 00:09:20.056 "core_mask": "0x1", 00:09:20.056 "workload": "randrw", 00:09:20.056 "percentage": 50, 00:09:20.056 "status": "finished", 00:09:20.056 "queue_depth": 1, 00:09:20.056 "io_size": 131072, 00:09:20.056 "runtime": 1.38492, 00:09:20.056 "iops": 15031.193137509748, 00:09:20.056 "mibps": 1878.8991421887185, 00:09:20.056 "io_failed": 1, 00:09:20.056 "io_timeout": 0, 00:09:20.056 "avg_latency_us": 91.96379384484624, 00:09:20.056 "min_latency_us": 27.165065502183406, 00:09:20.056 "max_latency_us": 1395.1441048034935 00:09:20.057 } 00:09:20.057 ], 00:09:20.057 "core_count": 1 00:09:20.057 } 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61713 00:09:20.057 04:26:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61713 ']' 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61713 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61713 00:09:20.057 killing process with pid 61713 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61713' 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61713 00:09:20.057 [2024-11-27 04:26:16.476593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.057 04:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61713 00:09:20.057 [2024-11-27 04:26:16.615061] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.438 04:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:21.438 04:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mIjO3VlH03 00:09:21.438 04:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:21.438 04:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:21.438 04:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:21.438 04:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.438 04:26:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.438 04:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:21.438 00:09:21.438 real 0m4.540s 00:09:21.438 user 0m5.482s 00:09:21.438 sys 0m0.575s 00:09:21.438 04:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.438 04:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.438 ************************************ 00:09:21.438 END TEST raid_write_error_test 00:09:21.438 ************************************ 00:09:21.438 04:26:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:21.438 04:26:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:21.438 04:26:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.438 04:26:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.438 04:26:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.438 ************************************ 00:09:21.438 START TEST raid_state_function_test 00:09:21.438 ************************************ 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.438 Process raid pid: 61862 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@229 -- # raid_pid=61862 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61862' 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61862 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61862 ']' 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.438 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.438 [2024-11-27 04:26:18.019361] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:21.438 [2024-11-27 04:26:18.019493] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.763 [2024-11-27 04:26:18.196438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.763 [2024-11-27 04:26:18.317177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.023 [2024-11-27 04:26:18.541396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.023 [2024-11-27 04:26:18.541519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.592 [2024-11-27 04:26:18.897941] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.592 [2024-11-27 04:26:18.898001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.592 [2024-11-27 04:26:18.898012] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.592 [2024-11-27 04:26:18.898022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.592 04:26:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.592 "name": "Existed_Raid", 00:09:22.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.592 "strip_size_kb": 64, 00:09:22.592 "state": "configuring", 00:09:22.592 
"raid_level": "concat", 00:09:22.592 "superblock": false, 00:09:22.592 "num_base_bdevs": 2, 00:09:22.592 "num_base_bdevs_discovered": 0, 00:09:22.592 "num_base_bdevs_operational": 2, 00:09:22.592 "base_bdevs_list": [ 00:09:22.592 { 00:09:22.592 "name": "BaseBdev1", 00:09:22.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.592 "is_configured": false, 00:09:22.592 "data_offset": 0, 00:09:22.592 "data_size": 0 00:09:22.592 }, 00:09:22.592 { 00:09:22.592 "name": "BaseBdev2", 00:09:22.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.592 "is_configured": false, 00:09:22.592 "data_offset": 0, 00:09:22.592 "data_size": 0 00:09:22.592 } 00:09:22.592 ] 00:09:22.592 }' 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.592 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.854 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.854 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.855 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.855 [2024-11-27 04:26:19.385097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.855 [2024-11-27 04:26:19.385196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:22.855 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.855 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:22.855 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.855 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:22.855 [2024-11-27 04:26:19.397037] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.855 [2024-11-27 04:26:19.397156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.855 [2024-11-27 04:26:19.397189] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.855 [2024-11-27 04:26:19.397216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.855 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.855 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.855 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.855 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.116 [2024-11-27 04:26:19.445696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.116 BaseBdev1 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.116 [ 00:09:23.116 { 00:09:23.116 "name": "BaseBdev1", 00:09:23.116 "aliases": [ 00:09:23.116 "6bf19608-3577-47ed-ad32-ed881685a5e3" 00:09:23.116 ], 00:09:23.116 "product_name": "Malloc disk", 00:09:23.116 "block_size": 512, 00:09:23.116 "num_blocks": 65536, 00:09:23.116 "uuid": "6bf19608-3577-47ed-ad32-ed881685a5e3", 00:09:23.116 "assigned_rate_limits": { 00:09:23.116 "rw_ios_per_sec": 0, 00:09:23.116 "rw_mbytes_per_sec": 0, 00:09:23.116 "r_mbytes_per_sec": 0, 00:09:23.116 "w_mbytes_per_sec": 0 00:09:23.116 }, 00:09:23.116 "claimed": true, 00:09:23.116 "claim_type": "exclusive_write", 00:09:23.116 "zoned": false, 00:09:23.116 "supported_io_types": { 00:09:23.116 "read": true, 00:09:23.116 "write": true, 00:09:23.116 "unmap": true, 00:09:23.116 "flush": true, 00:09:23.116 "reset": true, 00:09:23.116 "nvme_admin": false, 00:09:23.116 "nvme_io": false, 00:09:23.116 "nvme_io_md": false, 00:09:23.116 "write_zeroes": true, 00:09:23.116 "zcopy": true, 00:09:23.116 "get_zone_info": false, 00:09:23.116 "zone_management": false, 00:09:23.116 "zone_append": false, 00:09:23.116 "compare": false, 00:09:23.116 "compare_and_write": false, 00:09:23.116 "abort": true, 00:09:23.116 "seek_hole": false, 00:09:23.116 "seek_data": false, 00:09:23.116 "copy": true, 00:09:23.116 "nvme_iov_md": 
false 00:09:23.116 }, 00:09:23.116 "memory_domains": [ 00:09:23.116 { 00:09:23.116 "dma_device_id": "system", 00:09:23.116 "dma_device_type": 1 00:09:23.116 }, 00:09:23.116 { 00:09:23.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.116 "dma_device_type": 2 00:09:23.116 } 00:09:23.116 ], 00:09:23.116 "driver_specific": {} 00:09:23.116 } 00:09:23.116 ] 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.116 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.117 04:26:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.117 "name": "Existed_Raid", 00:09:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.117 "strip_size_kb": 64, 00:09:23.117 "state": "configuring", 00:09:23.117 "raid_level": "concat", 00:09:23.117 "superblock": false, 00:09:23.117 "num_base_bdevs": 2, 00:09:23.117 "num_base_bdevs_discovered": 1, 00:09:23.117 "num_base_bdevs_operational": 2, 00:09:23.117 "base_bdevs_list": [ 00:09:23.117 { 00:09:23.117 "name": "BaseBdev1", 00:09:23.117 "uuid": "6bf19608-3577-47ed-ad32-ed881685a5e3", 00:09:23.117 "is_configured": true, 00:09:23.117 "data_offset": 0, 00:09:23.117 "data_size": 65536 00:09:23.117 }, 00:09:23.117 { 00:09:23.117 "name": "BaseBdev2", 00:09:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.117 "is_configured": false, 00:09:23.117 "data_offset": 0, 00:09:23.117 "data_size": 0 00:09:23.117 } 00:09:23.117 ] 00:09:23.117 }' 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.117 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.376 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.376 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.376 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.376 [2024-11-27 04:26:19.952907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.376 [2024-11-27 04:26:19.953029] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:23.376 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.376 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:23.376 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.376 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.635 [2024-11-27 04:26:19.964939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.635 [2024-11-27 04:26:19.966964] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.635 [2024-11-27 04:26:19.967023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.635 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.635 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.635 "name": "Existed_Raid", 00:09:23.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.635 "strip_size_kb": 64, 00:09:23.635 "state": "configuring", 00:09:23.635 "raid_level": "concat", 00:09:23.635 "superblock": false, 00:09:23.635 "num_base_bdevs": 2, 00:09:23.635 "num_base_bdevs_discovered": 1, 00:09:23.635 "num_base_bdevs_operational": 2, 00:09:23.635 "base_bdevs_list": [ 00:09:23.635 { 00:09:23.635 "name": "BaseBdev1", 00:09:23.635 "uuid": "6bf19608-3577-47ed-ad32-ed881685a5e3", 00:09:23.635 "is_configured": true, 00:09:23.635 "data_offset": 0, 00:09:23.635 "data_size": 65536 00:09:23.635 }, 00:09:23.635 { 00:09:23.635 "name": "BaseBdev2", 00:09:23.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.635 "is_configured": false, 00:09:23.635 "data_offset": 0, 00:09:23.635 "data_size": 0 
00:09:23.635 } 00:09:23.635 ] 00:09:23.635 }' 00:09:23.635 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.635 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.894 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.894 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.894 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.153 [2024-11-27 04:26:20.498526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.153 [2024-11-27 04:26:20.498588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.153 [2024-11-27 04:26:20.498596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:24.153 [2024-11-27 04:26:20.498889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:24.153 [2024-11-27 04:26:20.499089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.153 [2024-11-27 04:26:20.499257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:24.153 [2024-11-27 04:26:20.499681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.153 BaseBdev2 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.153 04:26:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.153 [ 00:09:24.153 { 00:09:24.153 "name": "BaseBdev2", 00:09:24.153 "aliases": [ 00:09:24.153 "809261ca-df13-4168-ad70-56e3ba323a20" 00:09:24.153 ], 00:09:24.153 "product_name": "Malloc disk", 00:09:24.153 "block_size": 512, 00:09:24.153 "num_blocks": 65536, 00:09:24.153 "uuid": "809261ca-df13-4168-ad70-56e3ba323a20", 00:09:24.153 "assigned_rate_limits": { 00:09:24.153 "rw_ios_per_sec": 0, 00:09:24.153 "rw_mbytes_per_sec": 0, 00:09:24.153 "r_mbytes_per_sec": 0, 00:09:24.153 "w_mbytes_per_sec": 0 00:09:24.153 }, 00:09:24.153 "claimed": true, 00:09:24.153 "claim_type": "exclusive_write", 00:09:24.153 "zoned": false, 00:09:24.153 "supported_io_types": { 00:09:24.153 "read": true, 00:09:24.153 "write": true, 00:09:24.153 "unmap": true, 00:09:24.153 "flush": true, 00:09:24.153 "reset": true, 00:09:24.153 "nvme_admin": false, 00:09:24.153 "nvme_io": false, 00:09:24.153 "nvme_io_md": 
false, 00:09:24.153 "write_zeroes": true, 00:09:24.153 "zcopy": true, 00:09:24.153 "get_zone_info": false, 00:09:24.153 "zone_management": false, 00:09:24.153 "zone_append": false, 00:09:24.153 "compare": false, 00:09:24.153 "compare_and_write": false, 00:09:24.153 "abort": true, 00:09:24.153 "seek_hole": false, 00:09:24.153 "seek_data": false, 00:09:24.153 "copy": true, 00:09:24.153 "nvme_iov_md": false 00:09:24.153 }, 00:09:24.153 "memory_domains": [ 00:09:24.153 { 00:09:24.153 "dma_device_id": "system", 00:09:24.153 "dma_device_type": 1 00:09:24.153 }, 00:09:24.153 { 00:09:24.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.153 "dma_device_type": 2 00:09:24.153 } 00:09:24.153 ], 00:09:24.153 "driver_specific": {} 00:09:24.153 } 00:09:24.153 ] 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.153 "name": "Existed_Raid", 00:09:24.153 "uuid": "ddcafa2b-1803-4358-a081-244f94826437", 00:09:24.153 "strip_size_kb": 64, 00:09:24.153 "state": "online", 00:09:24.153 "raid_level": "concat", 00:09:24.153 "superblock": false, 00:09:24.153 "num_base_bdevs": 2, 00:09:24.153 "num_base_bdevs_discovered": 2, 00:09:24.153 "num_base_bdevs_operational": 2, 00:09:24.153 "base_bdevs_list": [ 00:09:24.153 { 00:09:24.153 "name": "BaseBdev1", 00:09:24.153 "uuid": "6bf19608-3577-47ed-ad32-ed881685a5e3", 00:09:24.153 "is_configured": true, 00:09:24.153 "data_offset": 0, 00:09:24.153 "data_size": 65536 00:09:24.153 }, 00:09:24.153 { 00:09:24.153 "name": "BaseBdev2", 00:09:24.153 "uuid": "809261ca-df13-4168-ad70-56e3ba323a20", 00:09:24.153 "is_configured": true, 00:09:24.153 "data_offset": 0, 00:09:24.153 "data_size": 65536 00:09:24.153 } 00:09:24.153 ] 00:09:24.153 }' 00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:24.153 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.410 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.410 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.410 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.410 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.410 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.411 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.411 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.668 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.668 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.668 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.668 [2024-11-27 04:26:21.002070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.668 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.668 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.668 "name": "Existed_Raid", 00:09:24.668 "aliases": [ 00:09:24.668 "ddcafa2b-1803-4358-a081-244f94826437" 00:09:24.668 ], 00:09:24.668 "product_name": "Raid Volume", 00:09:24.668 "block_size": 512, 00:09:24.668 "num_blocks": 131072, 00:09:24.668 "uuid": "ddcafa2b-1803-4358-a081-244f94826437", 00:09:24.668 "assigned_rate_limits": { 00:09:24.668 "rw_ios_per_sec": 0, 00:09:24.668 "rw_mbytes_per_sec": 0, 00:09:24.668 "r_mbytes_per_sec": 
0, 00:09:24.668 "w_mbytes_per_sec": 0 00:09:24.668 }, 00:09:24.668 "claimed": false, 00:09:24.668 "zoned": false, 00:09:24.668 "supported_io_types": { 00:09:24.668 "read": true, 00:09:24.668 "write": true, 00:09:24.668 "unmap": true, 00:09:24.668 "flush": true, 00:09:24.668 "reset": true, 00:09:24.668 "nvme_admin": false, 00:09:24.668 "nvme_io": false, 00:09:24.668 "nvme_io_md": false, 00:09:24.668 "write_zeroes": true, 00:09:24.668 "zcopy": false, 00:09:24.668 "get_zone_info": false, 00:09:24.668 "zone_management": false, 00:09:24.668 "zone_append": false, 00:09:24.668 "compare": false, 00:09:24.668 "compare_and_write": false, 00:09:24.668 "abort": false, 00:09:24.668 "seek_hole": false, 00:09:24.668 "seek_data": false, 00:09:24.668 "copy": false, 00:09:24.668 "nvme_iov_md": false 00:09:24.668 }, 00:09:24.668 "memory_domains": [ 00:09:24.668 { 00:09:24.668 "dma_device_id": "system", 00:09:24.668 "dma_device_type": 1 00:09:24.668 }, 00:09:24.668 { 00:09:24.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.668 "dma_device_type": 2 00:09:24.668 }, 00:09:24.668 { 00:09:24.668 "dma_device_id": "system", 00:09:24.668 "dma_device_type": 1 00:09:24.668 }, 00:09:24.668 { 00:09:24.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.668 "dma_device_type": 2 00:09:24.668 } 00:09:24.668 ], 00:09:24.669 "driver_specific": { 00:09:24.669 "raid": { 00:09:24.669 "uuid": "ddcafa2b-1803-4358-a081-244f94826437", 00:09:24.669 "strip_size_kb": 64, 00:09:24.669 "state": "online", 00:09:24.669 "raid_level": "concat", 00:09:24.669 "superblock": false, 00:09:24.669 "num_base_bdevs": 2, 00:09:24.669 "num_base_bdevs_discovered": 2, 00:09:24.669 "num_base_bdevs_operational": 2, 00:09:24.669 "base_bdevs_list": [ 00:09:24.669 { 00:09:24.669 "name": "BaseBdev1", 00:09:24.669 "uuid": "6bf19608-3577-47ed-ad32-ed881685a5e3", 00:09:24.669 "is_configured": true, 00:09:24.669 "data_offset": 0, 00:09:24.669 "data_size": 65536 00:09:24.669 }, 00:09:24.669 { 00:09:24.669 "name": "BaseBdev2", 
00:09:24.669 "uuid": "809261ca-df13-4168-ad70-56e3ba323a20", 00:09:24.669 "is_configured": true, 00:09:24.669 "data_offset": 0, 00:09:24.669 "data_size": 65536 00:09:24.669 } 00:09:24.669 ] 00:09:24.669 } 00:09:24.669 } 00:09:24.669 }' 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:24.669 BaseBdev2' 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.669 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.669 [2024-11-27 04:26:21.221417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.669 [2024-11-27 04:26:21.221528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.669 [2024-11-27 04:26:21.221593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.927 "name": "Existed_Raid", 00:09:24.927 "uuid": "ddcafa2b-1803-4358-a081-244f94826437", 00:09:24.927 "strip_size_kb": 64, 00:09:24.927 
"state": "offline", 00:09:24.927 "raid_level": "concat", 00:09:24.927 "superblock": false, 00:09:24.927 "num_base_bdevs": 2, 00:09:24.927 "num_base_bdevs_discovered": 1, 00:09:24.927 "num_base_bdevs_operational": 1, 00:09:24.927 "base_bdevs_list": [ 00:09:24.927 { 00:09:24.927 "name": null, 00:09:24.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.927 "is_configured": false, 00:09:24.927 "data_offset": 0, 00:09:24.927 "data_size": 65536 00:09:24.927 }, 00:09:24.927 { 00:09:24.927 "name": "BaseBdev2", 00:09:24.927 "uuid": "809261ca-df13-4168-ad70-56e3ba323a20", 00:09:24.927 "is_configured": true, 00:09:24.927 "data_offset": 0, 00:09:24.927 "data_size": 65536 00:09:24.927 } 00:09:24.927 ] 00:09:24.927 }' 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.927 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.551 [2024-11-27 04:26:21.835063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.551 [2024-11-27 04:26:21.835199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.551 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61862 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61862 ']' 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61862 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61862 00:09:25.551 killing process with pid 61862 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61862' 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61862 00:09:25.551 [2024-11-27 04:26:22.050359] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.551 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61862 00:09:25.551 [2024-11-27 04:26:22.070439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.923 ************************************ 00:09:26.923 END TEST raid_state_function_test 00:09:26.923 ************************************ 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:26.923 00:09:26.923 real 0m5.388s 00:09:26.923 user 0m7.741s 00:09:26.923 sys 0m0.874s 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.923 04:26:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:26.923 04:26:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:26.923 04:26:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.923 04:26:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.923 ************************************ 00:09:26.923 START TEST raid_state_function_test_sb 00:09:26.923 ************************************ 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62115 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62115' 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:26.923 Process raid pid: 62115 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62115 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62115 ']' 00:09:26.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.923 04:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.923 [2024-11-27 04:26:23.479334] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:26.923 [2024-11-27 04:26:23.479488] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.181 [2024-11-27 04:26:23.660875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.439 [2024-11-27 04:26:23.793173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.439 [2024-11-27 04:26:24.016412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.439 [2024-11-27 04:26:24.016578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.008 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.008 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:28.008 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.008 04:26:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.008 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.008 [2024-11-27 04:26:24.396356] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.008 [2024-11-27 04:26:24.396492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.008 [2024-11-27 04:26:24.396536] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.009 [2024-11-27 04:26:24.396571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.009 
04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.009 "name": "Existed_Raid", 00:09:28.009 "uuid": "5c7c2018-f02f-4700-b3c2-4c2ee4c986d7", 00:09:28.009 "strip_size_kb": 64, 00:09:28.009 "state": "configuring", 00:09:28.009 "raid_level": "concat", 00:09:28.009 "superblock": true, 00:09:28.009 "num_base_bdevs": 2, 00:09:28.009 "num_base_bdevs_discovered": 0, 00:09:28.009 "num_base_bdevs_operational": 2, 00:09:28.009 "base_bdevs_list": [ 00:09:28.009 { 00:09:28.009 "name": "BaseBdev1", 00:09:28.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.009 "is_configured": false, 00:09:28.009 "data_offset": 0, 00:09:28.009 "data_size": 0 00:09:28.009 }, 00:09:28.009 { 00:09:28.009 "name": "BaseBdev2", 00:09:28.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.009 "is_configured": false, 00:09:28.009 "data_offset": 0, 00:09:28.009 "data_size": 0 00:09:28.009 } 00:09:28.009 ] 00:09:28.009 }' 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.009 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.577 [2024-11-27 04:26:24.884313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.577 [2024-11-27 04:26:24.884361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.577 [2024-11-27 04:26:24.896286] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.577 [2024-11-27 04:26:24.896342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.577 [2024-11-27 04:26:24.896356] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.577 [2024-11-27 04:26:24.896372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.577 [2024-11-27 04:26:24.949556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:09:28.577 BaseBdev1 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.577 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.578 [ 00:09:28.578 { 00:09:28.578 "name": "BaseBdev1", 00:09:28.578 "aliases": [ 00:09:28.578 "3535320e-522d-4b03-b1a0-5a3b78707dc9" 00:09:28.578 ], 00:09:28.578 "product_name": "Malloc disk", 00:09:28.578 "block_size": 512, 00:09:28.578 "num_blocks": 65536, 00:09:28.578 "uuid": "3535320e-522d-4b03-b1a0-5a3b78707dc9", 00:09:28.578 
"assigned_rate_limits": { 00:09:28.578 "rw_ios_per_sec": 0, 00:09:28.578 "rw_mbytes_per_sec": 0, 00:09:28.578 "r_mbytes_per_sec": 0, 00:09:28.578 "w_mbytes_per_sec": 0 00:09:28.578 }, 00:09:28.578 "claimed": true, 00:09:28.578 "claim_type": "exclusive_write", 00:09:28.578 "zoned": false, 00:09:28.578 "supported_io_types": { 00:09:28.578 "read": true, 00:09:28.578 "write": true, 00:09:28.578 "unmap": true, 00:09:28.578 "flush": true, 00:09:28.578 "reset": true, 00:09:28.578 "nvme_admin": false, 00:09:28.578 "nvme_io": false, 00:09:28.578 "nvme_io_md": false, 00:09:28.578 "write_zeroes": true, 00:09:28.578 "zcopy": true, 00:09:28.578 "get_zone_info": false, 00:09:28.578 "zone_management": false, 00:09:28.578 "zone_append": false, 00:09:28.578 "compare": false, 00:09:28.578 "compare_and_write": false, 00:09:28.578 "abort": true, 00:09:28.578 "seek_hole": false, 00:09:28.578 "seek_data": false, 00:09:28.578 "copy": true, 00:09:28.578 "nvme_iov_md": false 00:09:28.578 }, 00:09:28.578 "memory_domains": [ 00:09:28.578 { 00:09:28.578 "dma_device_id": "system", 00:09:28.578 "dma_device_type": 1 00:09:28.578 }, 00:09:28.578 { 00:09:28.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.578 "dma_device_type": 2 00:09:28.578 } 00:09:28.578 ], 00:09:28.578 "driver_specific": {} 00:09:28.578 } 00:09:28.578 ] 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.578 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.578 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.578 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.578 "name": "Existed_Raid", 00:09:28.578 "uuid": "b9ad20e6-5367-41d3-8569-497821de1323", 00:09:28.578 "strip_size_kb": 64, 00:09:28.578 "state": "configuring", 00:09:28.578 "raid_level": "concat", 00:09:28.578 "superblock": true, 00:09:28.578 "num_base_bdevs": 2, 00:09:28.578 "num_base_bdevs_discovered": 1, 00:09:28.578 "num_base_bdevs_operational": 2, 00:09:28.578 "base_bdevs_list": [ 00:09:28.578 { 00:09:28.578 "name": "BaseBdev1", 00:09:28.578 "uuid": "3535320e-522d-4b03-b1a0-5a3b78707dc9", 00:09:28.578 "is_configured": true, 00:09:28.578 "data_offset": 
2048, 00:09:28.578 "data_size": 63488 00:09:28.578 }, 00:09:28.578 { 00:09:28.578 "name": "BaseBdev2", 00:09:28.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.578 "is_configured": false, 00:09:28.578 "data_offset": 0, 00:09:28.578 "data_size": 0 00:09:28.578 } 00:09:28.578 ] 00:09:28.578 }' 00:09:28.578 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.578 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.147 [2024-11-27 04:26:25.468854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.147 [2024-11-27 04:26:25.469055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.147 [2024-11-27 04:26:25.480895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.147 [2024-11-27 04:26:25.483294] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.147 [2024-11-27 04:26:25.483401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.147 "name": "Existed_Raid", 00:09:29.147 "uuid": "bbb51f07-905d-4db6-87c3-b33ab37bf8eb", 00:09:29.147 "strip_size_kb": 64, 00:09:29.147 "state": "configuring", 00:09:29.147 "raid_level": "concat", 00:09:29.147 "superblock": true, 00:09:29.147 "num_base_bdevs": 2, 00:09:29.147 "num_base_bdevs_discovered": 1, 00:09:29.147 "num_base_bdevs_operational": 2, 00:09:29.147 "base_bdevs_list": [ 00:09:29.147 { 00:09:29.147 "name": "BaseBdev1", 00:09:29.147 "uuid": "3535320e-522d-4b03-b1a0-5a3b78707dc9", 00:09:29.147 "is_configured": true, 00:09:29.147 "data_offset": 2048, 00:09:29.147 "data_size": 63488 00:09:29.147 }, 00:09:29.147 { 00:09:29.147 "name": "BaseBdev2", 00:09:29.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.147 "is_configured": false, 00:09:29.147 "data_offset": 0, 00:09:29.147 "data_size": 0 00:09:29.147 } 00:09:29.147 ] 00:09:29.147 }' 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.147 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.406 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.406 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.407 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.666 [2024-11-27 04:26:25.993554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.666 [2024-11-27 04:26:25.993897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:29.666 [2024-11-27 04:26:25.993917] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:29.666 [2024-11-27 04:26:25.994261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:29.666 BaseBdev2 00:09:29.666 [2024-11-27 04:26:25.994468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:29.666 [2024-11-27 04:26:25.994493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:29.666 [2024-11-27 04:26:25.994670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.666 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.666 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:29.666 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:29.666 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.667 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.667 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.667 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.667 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.667 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.667 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.667 [ 00:09:29.667 { 00:09:29.667 "name": "BaseBdev2", 00:09:29.667 "aliases": [ 00:09:29.667 "0ecd15b4-07bc-4d8e-bf3f-1c719dadf51c" 00:09:29.667 ], 00:09:29.667 "product_name": "Malloc disk", 00:09:29.667 "block_size": 512, 00:09:29.667 "num_blocks": 65536, 00:09:29.667 "uuid": "0ecd15b4-07bc-4d8e-bf3f-1c719dadf51c", 00:09:29.667 "assigned_rate_limits": { 00:09:29.667 "rw_ios_per_sec": 0, 00:09:29.667 "rw_mbytes_per_sec": 0, 00:09:29.667 "r_mbytes_per_sec": 0, 00:09:29.667 "w_mbytes_per_sec": 0 00:09:29.667 }, 00:09:29.667 "claimed": true, 00:09:29.667 "claim_type": "exclusive_write", 00:09:29.667 "zoned": false, 00:09:29.667 "supported_io_types": { 00:09:29.667 "read": true, 00:09:29.667 "write": true, 00:09:29.667 "unmap": true, 00:09:29.667 "flush": true, 00:09:29.667 "reset": true, 00:09:29.667 "nvme_admin": false, 00:09:29.667 "nvme_io": false, 00:09:29.667 "nvme_io_md": false, 00:09:29.667 "write_zeroes": true, 00:09:29.667 "zcopy": true, 00:09:29.667 "get_zone_info": false, 00:09:29.667 "zone_management": false, 00:09:29.667 "zone_append": false, 00:09:29.667 "compare": false, 00:09:29.667 "compare_and_write": false, 00:09:29.667 "abort": true, 00:09:29.667 "seek_hole": false, 00:09:29.667 "seek_data": false, 00:09:29.667 "copy": true, 00:09:29.667 "nvme_iov_md": false 00:09:29.667 }, 00:09:29.667 "memory_domains": [ 00:09:29.667 { 00:09:29.667 "dma_device_id": "system", 00:09:29.667 "dma_device_type": 1 00:09:29.667 }, 00:09:29.667 { 00:09:29.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.667 "dma_device_type": 2 00:09:29.667 } 00:09:29.667 ], 00:09:29.667 "driver_specific": {} 00:09:29.667 } 00:09:29.667 ] 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.667 "name": "Existed_Raid", 00:09:29.667 "uuid": "bbb51f07-905d-4db6-87c3-b33ab37bf8eb", 00:09:29.667 "strip_size_kb": 64, 00:09:29.667 "state": "online", 00:09:29.667 "raid_level": "concat", 00:09:29.667 "superblock": true, 00:09:29.667 "num_base_bdevs": 2, 00:09:29.667 "num_base_bdevs_discovered": 2, 00:09:29.667 "num_base_bdevs_operational": 2, 00:09:29.667 "base_bdevs_list": [ 00:09:29.667 { 00:09:29.667 "name": "BaseBdev1", 00:09:29.667 "uuid": "3535320e-522d-4b03-b1a0-5a3b78707dc9", 00:09:29.667 "is_configured": true, 00:09:29.667 "data_offset": 2048, 00:09:29.667 "data_size": 63488 00:09:29.667 }, 00:09:29.667 { 00:09:29.667 "name": "BaseBdev2", 00:09:29.667 "uuid": "0ecd15b4-07bc-4d8e-bf3f-1c719dadf51c", 00:09:29.667 "is_configured": true, 00:09:29.667 "data_offset": 2048, 00:09:29.667 "data_size": 63488 00:09:29.667 } 00:09:29.667 ] 00:09:29.667 }' 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.667 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.964 [2024-11-27 04:26:26.430293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.964 "name": "Existed_Raid", 00:09:29.964 "aliases": [ 00:09:29.964 "bbb51f07-905d-4db6-87c3-b33ab37bf8eb" 00:09:29.964 ], 00:09:29.964 "product_name": "Raid Volume", 00:09:29.964 "block_size": 512, 00:09:29.964 "num_blocks": 126976, 00:09:29.964 "uuid": "bbb51f07-905d-4db6-87c3-b33ab37bf8eb", 00:09:29.964 "assigned_rate_limits": { 00:09:29.964 "rw_ios_per_sec": 0, 00:09:29.964 "rw_mbytes_per_sec": 0, 00:09:29.964 "r_mbytes_per_sec": 0, 00:09:29.964 "w_mbytes_per_sec": 0 00:09:29.964 }, 00:09:29.964 "claimed": false, 00:09:29.964 "zoned": false, 00:09:29.964 "supported_io_types": { 00:09:29.964 "read": true, 00:09:29.964 "write": true, 00:09:29.964 "unmap": true, 00:09:29.964 "flush": true, 00:09:29.964 "reset": true, 00:09:29.964 "nvme_admin": false, 00:09:29.964 "nvme_io": false, 00:09:29.964 "nvme_io_md": false, 00:09:29.964 "write_zeroes": true, 00:09:29.964 "zcopy": false, 00:09:29.964 "get_zone_info": false, 00:09:29.964 "zone_management": false, 00:09:29.964 "zone_append": false, 00:09:29.964 "compare": false, 00:09:29.964 "compare_and_write": false, 00:09:29.964 "abort": false, 00:09:29.964 "seek_hole": false, 
00:09:29.964 "seek_data": false, 00:09:29.964 "copy": false, 00:09:29.964 "nvme_iov_md": false 00:09:29.964 }, 00:09:29.964 "memory_domains": [ 00:09:29.964 { 00:09:29.964 "dma_device_id": "system", 00:09:29.964 "dma_device_type": 1 00:09:29.964 }, 00:09:29.964 { 00:09:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.964 "dma_device_type": 2 00:09:29.964 }, 00:09:29.964 { 00:09:29.964 "dma_device_id": "system", 00:09:29.964 "dma_device_type": 1 00:09:29.964 }, 00:09:29.964 { 00:09:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.964 "dma_device_type": 2 00:09:29.964 } 00:09:29.964 ], 00:09:29.964 "driver_specific": { 00:09:29.964 "raid": { 00:09:29.964 "uuid": "bbb51f07-905d-4db6-87c3-b33ab37bf8eb", 00:09:29.964 "strip_size_kb": 64, 00:09:29.964 "state": "online", 00:09:29.964 "raid_level": "concat", 00:09:29.964 "superblock": true, 00:09:29.964 "num_base_bdevs": 2, 00:09:29.964 "num_base_bdevs_discovered": 2, 00:09:29.964 "num_base_bdevs_operational": 2, 00:09:29.964 "base_bdevs_list": [ 00:09:29.964 { 00:09:29.964 "name": "BaseBdev1", 00:09:29.964 "uuid": "3535320e-522d-4b03-b1a0-5a3b78707dc9", 00:09:29.964 "is_configured": true, 00:09:29.964 "data_offset": 2048, 00:09:29.964 "data_size": 63488 00:09:29.964 }, 00:09:29.964 { 00:09:29.964 "name": "BaseBdev2", 00:09:29.964 "uuid": "0ecd15b4-07bc-4d8e-bf3f-1c719dadf51c", 00:09:29.964 "is_configured": true, 00:09:29.964 "data_offset": 2048, 00:09:29.964 "data_size": 63488 00:09:29.964 } 00:09:29.964 ] 00:09:29.964 } 00:09:29.964 } 00:09:29.964 }' 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:29.964 BaseBdev2' 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.964 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.245 04:26:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.245 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.246 [2024-11-27 04:26:26.632614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:30.246 [2024-11-27 04:26:26.632730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.246 [2024-11-27 04:26:26.632826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.246 04:26:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.246 "name": "Existed_Raid", 00:09:30.246 "uuid": "bbb51f07-905d-4db6-87c3-b33ab37bf8eb", 00:09:30.246 "strip_size_kb": 64, 00:09:30.246 "state": "offline", 00:09:30.246 "raid_level": "concat", 00:09:30.246 "superblock": true, 00:09:30.246 "num_base_bdevs": 2, 00:09:30.246 "num_base_bdevs_discovered": 1, 00:09:30.246 "num_base_bdevs_operational": 1, 00:09:30.246 "base_bdevs_list": [ 00:09:30.246 { 00:09:30.246 "name": null, 00:09:30.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.246 "is_configured": false, 00:09:30.246 "data_offset": 0, 00:09:30.246 "data_size": 63488 00:09:30.246 }, 00:09:30.246 { 00:09:30.246 "name": 
"BaseBdev2", 00:09:30.246 "uuid": "0ecd15b4-07bc-4d8e-bf3f-1c719dadf51c", 00:09:30.246 "is_configured": true, 00:09:30.246 "data_offset": 2048, 00:09:30.246 "data_size": 63488 00:09:30.246 } 00:09:30.246 ] 00:09:30.246 }' 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.246 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.815 [2024-11-27 04:26:27.256939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.815 [2024-11-27 04:26:27.257068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:30.815 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62115 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62115 ']' 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62115 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62115 00:09:31.074 killing process with 
pid 62115 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62115' 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62115 00:09:31.074 [2024-11-27 04:26:27.446744] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.074 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62115 00:09:31.074 [2024-11-27 04:26:27.465714] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.450 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:32.450 00:09:32.450 real 0m5.290s 00:09:32.450 user 0m7.652s 00:09:32.450 sys 0m0.817s 00:09:32.450 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.450 ************************************ 00:09:32.450 END TEST raid_state_function_test_sb 00:09:32.450 ************************************ 00:09:32.450 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.450 04:26:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:32.450 04:26:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:32.450 04:26:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.450 04:26:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.450 ************************************ 00:09:32.450 START TEST raid_superblock_test 00:09:32.450 ************************************ 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # 
raid_superblock_test concat 2 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62367 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:32.450 04:26:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62367 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62367 ']' 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.450 04:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.450 [2024-11-27 04:26:28.840039] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:32.450 [2024-11-27 04:26:28.840638] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62367 ] 00:09:32.450 [2024-11-27 04:26:29.016056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.708 [2024-11-27 04:26:29.139347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.966 [2024-11-27 04:26:29.354693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.966 [2024-11-27 04:26:29.354843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:33.225 
04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.225 malloc1 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.225 [2024-11-27 04:26:29.751530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:33.225 [2024-11-27 04:26:29.751601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.225 [2024-11-27 04:26:29.751626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:33.225 [2024-11-27 04:26:29.751637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.225 [2024-11-27 04:26:29.754277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.225 [2024-11-27 04:26:29.754318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:33.225 pt1 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.225 malloc2 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.225 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.485 [2024-11-27 04:26:29.812446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.485 [2024-11-27 04:26:29.812573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.485 [2024-11-27 04:26:29.812625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:33.485 [2024-11-27 04:26:29.812662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.485 [2024-11-27 04:26:29.815063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.485 [2024-11-27 04:26:29.815158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.485 
pt2 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.485 [2024-11-27 04:26:29.824496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:33.485 [2024-11-27 04:26:29.827061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.485 [2024-11-27 04:26:29.827384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:33.485 [2024-11-27 04:26:29.827459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:33.485 [2024-11-27 04:26:29.827887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:33.485 [2024-11-27 04:26:29.828218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:33.485 [2024-11-27 04:26:29.828301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:33.485 [2024-11-27 04:26:29.828677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.485 "name": "raid_bdev1", 00:09:33.485 "uuid": "5150c2cf-7e80-4497-8768-395cb0fd9572", 00:09:33.485 "strip_size_kb": 64, 00:09:33.485 "state": "online", 00:09:33.485 "raid_level": "concat", 00:09:33.485 "superblock": true, 00:09:33.485 "num_base_bdevs": 2, 00:09:33.485 "num_base_bdevs_discovered": 2, 00:09:33.485 "num_base_bdevs_operational": 2, 00:09:33.485 "base_bdevs_list": [ 00:09:33.485 { 00:09:33.485 "name": "pt1", 
00:09:33.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.485 "is_configured": true, 00:09:33.485 "data_offset": 2048, 00:09:33.485 "data_size": 63488 00:09:33.485 }, 00:09:33.485 { 00:09:33.485 "name": "pt2", 00:09:33.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.485 "is_configured": true, 00:09:33.485 "data_offset": 2048, 00:09:33.485 "data_size": 63488 00:09:33.485 } 00:09:33.485 ] 00:09:33.485 }' 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.485 04:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.745 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.745 [2024-11-27 04:26:30.328100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.005 "name": "raid_bdev1", 00:09:34.005 "aliases": [ 00:09:34.005 "5150c2cf-7e80-4497-8768-395cb0fd9572" 00:09:34.005 ], 00:09:34.005 "product_name": "Raid Volume", 00:09:34.005 "block_size": 512, 00:09:34.005 "num_blocks": 126976, 00:09:34.005 "uuid": "5150c2cf-7e80-4497-8768-395cb0fd9572", 00:09:34.005 "assigned_rate_limits": { 00:09:34.005 "rw_ios_per_sec": 0, 00:09:34.005 "rw_mbytes_per_sec": 0, 00:09:34.005 "r_mbytes_per_sec": 0, 00:09:34.005 "w_mbytes_per_sec": 0 00:09:34.005 }, 00:09:34.005 "claimed": false, 00:09:34.005 "zoned": false, 00:09:34.005 "supported_io_types": { 00:09:34.005 "read": true, 00:09:34.005 "write": true, 00:09:34.005 "unmap": true, 00:09:34.005 "flush": true, 00:09:34.005 "reset": true, 00:09:34.005 "nvme_admin": false, 00:09:34.005 "nvme_io": false, 00:09:34.005 "nvme_io_md": false, 00:09:34.005 "write_zeroes": true, 00:09:34.005 "zcopy": false, 00:09:34.005 "get_zone_info": false, 00:09:34.005 "zone_management": false, 00:09:34.005 "zone_append": false, 00:09:34.005 "compare": false, 00:09:34.005 "compare_and_write": false, 00:09:34.005 "abort": false, 00:09:34.005 "seek_hole": false, 00:09:34.005 "seek_data": false, 00:09:34.005 "copy": false, 00:09:34.005 "nvme_iov_md": false 00:09:34.005 }, 00:09:34.005 "memory_domains": [ 00:09:34.005 { 00:09:34.005 "dma_device_id": "system", 00:09:34.005 "dma_device_type": 1 00:09:34.005 }, 00:09:34.005 { 00:09:34.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.005 "dma_device_type": 2 00:09:34.005 }, 00:09:34.005 { 00:09:34.005 "dma_device_id": "system", 00:09:34.005 "dma_device_type": 1 00:09:34.005 }, 00:09:34.005 { 00:09:34.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.005 "dma_device_type": 2 00:09:34.005 } 00:09:34.005 ], 00:09:34.005 "driver_specific": { 00:09:34.005 "raid": { 00:09:34.005 "uuid": "5150c2cf-7e80-4497-8768-395cb0fd9572", 00:09:34.005 "strip_size_kb": 64, 00:09:34.005 "state": "online", 00:09:34.005 
"raid_level": "concat", 00:09:34.005 "superblock": true, 00:09:34.005 "num_base_bdevs": 2, 00:09:34.005 "num_base_bdevs_discovered": 2, 00:09:34.005 "num_base_bdevs_operational": 2, 00:09:34.005 "base_bdevs_list": [ 00:09:34.005 { 00:09:34.005 "name": "pt1", 00:09:34.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.005 "is_configured": true, 00:09:34.005 "data_offset": 2048, 00:09:34.005 "data_size": 63488 00:09:34.005 }, 00:09:34.005 { 00:09:34.005 "name": "pt2", 00:09:34.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.005 "is_configured": true, 00:09:34.005 "data_offset": 2048, 00:09:34.005 "data_size": 63488 00:09:34.005 } 00:09:34.005 ] 00:09:34.005 } 00:09:34.005 } 00:09:34.005 }' 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:34.005 pt2' 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.005 04:26:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.005 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.006 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.006 [2024-11-27 04:26:30.575616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5150c2cf-7e80-4497-8768-395cb0fd9572 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
5150c2cf-7e80-4497-8768-395cb0fd9572 ']' 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.266 [2024-11-27 04:26:30.619224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.266 [2024-11-27 04:26:30.619259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.266 [2024-11-27 04:26:30.619356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.266 [2024-11-27 04:26:30.619410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.266 [2024-11-27 04:26:30.619423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.266 04:26:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.266 [2024-11-27 04:26:30.751053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:34.266 [2024-11-27 04:26:30.753280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:34.266 [2024-11-27 04:26:30.753363] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:34.266 [2024-11-27 04:26:30.753425] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:34.266 [2024-11-27 04:26:30.753444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.266 [2024-11-27 04:26:30.753457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:34.266 request: 00:09:34.266 { 00:09:34.266 "name": "raid_bdev1", 00:09:34.266 "raid_level": "concat", 00:09:34.266 "base_bdevs": [ 00:09:34.266 "malloc1", 00:09:34.266 "malloc2" 00:09:34.266 ], 00:09:34.266 "strip_size_kb": 64, 
00:09:34.266 "superblock": false, 00:09:34.266 "method": "bdev_raid_create", 00:09:34.266 "req_id": 1 00:09:34.266 } 00:09:34.266 Got JSON-RPC error response 00:09:34.266 response: 00:09:34.266 { 00:09:34.266 "code": -17, 00:09:34.266 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:34.266 } 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.266 [2024-11-27 04:26:30.818917] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:09:34.266 [2024-11-27 04:26:30.819053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.266 [2024-11-27 04:26:30.819103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:34.266 [2024-11-27 04:26:30.819160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.266 [2024-11-27 04:26:30.821721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.266 [2024-11-27 04:26:30.821811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.266 [2024-11-27 04:26:30.821938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:34.266 [2024-11-27 04:26:30.822040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.266 pt1 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.266 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.267 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.267 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.267 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.533 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.533 "name": "raid_bdev1", 00:09:34.533 "uuid": "5150c2cf-7e80-4497-8768-395cb0fd9572", 00:09:34.533 "strip_size_kb": 64, 00:09:34.533 "state": "configuring", 00:09:34.533 "raid_level": "concat", 00:09:34.533 "superblock": true, 00:09:34.533 "num_base_bdevs": 2, 00:09:34.533 "num_base_bdevs_discovered": 1, 00:09:34.533 "num_base_bdevs_operational": 2, 00:09:34.533 "base_bdevs_list": [ 00:09:34.533 { 00:09:34.533 "name": "pt1", 00:09:34.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.533 "is_configured": true, 00:09:34.533 "data_offset": 2048, 00:09:34.533 "data_size": 63488 00:09:34.533 }, 00:09:34.533 { 00:09:34.533 "name": null, 00:09:34.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.533 "is_configured": false, 00:09:34.533 "data_offset": 2048, 00:09:34.533 "data_size": 63488 00:09:34.533 } 00:09:34.533 ] 00:09:34.533 }' 00:09:34.533 04:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.533 04:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.807 [2024-11-27 04:26:31.278141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:34.807 [2024-11-27 04:26:31.278292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.807 [2024-11-27 04:26:31.278338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:34.807 [2024-11-27 04:26:31.278392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.807 [2024-11-27 04:26:31.278963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.807 [2024-11-27 04:26:31.279041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:34.807 [2024-11-27 04:26:31.279195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:34.807 [2024-11-27 04:26:31.279268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.807 [2024-11-27 04:26:31.279463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:34.807 [2024-11-27 04:26:31.279509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:34.807 [2024-11-27 04:26:31.279813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:34.807 [2024-11-27 04:26:31.280022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:09:34.807 [2024-11-27 04:26:31.280081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:34.807 [2024-11-27 04:26:31.280329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.807 pt2 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.807 "name": "raid_bdev1", 00:09:34.807 "uuid": "5150c2cf-7e80-4497-8768-395cb0fd9572", 00:09:34.807 "strip_size_kb": 64, 00:09:34.807 "state": "online", 00:09:34.807 "raid_level": "concat", 00:09:34.807 "superblock": true, 00:09:34.807 "num_base_bdevs": 2, 00:09:34.807 "num_base_bdevs_discovered": 2, 00:09:34.807 "num_base_bdevs_operational": 2, 00:09:34.807 "base_bdevs_list": [ 00:09:34.807 { 00:09:34.807 "name": "pt1", 00:09:34.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.807 "is_configured": true, 00:09:34.807 "data_offset": 2048, 00:09:34.807 "data_size": 63488 00:09:34.807 }, 00:09:34.807 { 00:09:34.807 "name": "pt2", 00:09:34.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.807 "is_configured": true, 00:09:34.807 "data_offset": 2048, 00:09:34.807 "data_size": 63488 00:09:34.807 } 00:09:34.807 ] 00:09:34.807 }' 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.807 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.377 04:26:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.377 [2024-11-27 04:26:31.765548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.377 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.378 "name": "raid_bdev1", 00:09:35.378 "aliases": [ 00:09:35.378 "5150c2cf-7e80-4497-8768-395cb0fd9572" 00:09:35.378 ], 00:09:35.378 "product_name": "Raid Volume", 00:09:35.378 "block_size": 512, 00:09:35.378 "num_blocks": 126976, 00:09:35.378 "uuid": "5150c2cf-7e80-4497-8768-395cb0fd9572", 00:09:35.378 "assigned_rate_limits": { 00:09:35.378 "rw_ios_per_sec": 0, 00:09:35.378 "rw_mbytes_per_sec": 0, 00:09:35.378 "r_mbytes_per_sec": 0, 00:09:35.378 "w_mbytes_per_sec": 0 00:09:35.378 }, 00:09:35.378 "claimed": false, 00:09:35.378 "zoned": false, 00:09:35.378 "supported_io_types": { 00:09:35.378 "read": true, 00:09:35.378 "write": true, 00:09:35.378 "unmap": true, 00:09:35.378 "flush": true, 00:09:35.378 "reset": true, 00:09:35.378 "nvme_admin": false, 00:09:35.378 "nvme_io": false, 00:09:35.378 "nvme_io_md": false, 00:09:35.378 "write_zeroes": true, 00:09:35.378 "zcopy": false, 00:09:35.378 "get_zone_info": false, 00:09:35.378 "zone_management": false, 00:09:35.378 "zone_append": false, 00:09:35.378 "compare": false, 00:09:35.378 "compare_and_write": false, 00:09:35.378 "abort": false, 00:09:35.378 "seek_hole": false, 00:09:35.378 
"seek_data": false, 00:09:35.378 "copy": false, 00:09:35.378 "nvme_iov_md": false 00:09:35.378 }, 00:09:35.378 "memory_domains": [ 00:09:35.378 { 00:09:35.378 "dma_device_id": "system", 00:09:35.378 "dma_device_type": 1 00:09:35.378 }, 00:09:35.378 { 00:09:35.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.378 "dma_device_type": 2 00:09:35.378 }, 00:09:35.378 { 00:09:35.378 "dma_device_id": "system", 00:09:35.378 "dma_device_type": 1 00:09:35.378 }, 00:09:35.378 { 00:09:35.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.378 "dma_device_type": 2 00:09:35.378 } 00:09:35.378 ], 00:09:35.378 "driver_specific": { 00:09:35.378 "raid": { 00:09:35.378 "uuid": "5150c2cf-7e80-4497-8768-395cb0fd9572", 00:09:35.378 "strip_size_kb": 64, 00:09:35.378 "state": "online", 00:09:35.378 "raid_level": "concat", 00:09:35.378 "superblock": true, 00:09:35.378 "num_base_bdevs": 2, 00:09:35.378 "num_base_bdevs_discovered": 2, 00:09:35.378 "num_base_bdevs_operational": 2, 00:09:35.378 "base_bdevs_list": [ 00:09:35.378 { 00:09:35.378 "name": "pt1", 00:09:35.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.378 "is_configured": true, 00:09:35.378 "data_offset": 2048, 00:09:35.378 "data_size": 63488 00:09:35.378 }, 00:09:35.378 { 00:09:35.378 "name": "pt2", 00:09:35.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.378 "is_configured": true, 00:09:35.378 "data_offset": 2048, 00:09:35.378 "data_size": 63488 00:09:35.378 } 00:09:35.378 ] 00:09:35.378 } 00:09:35.378 } 00:09:35.378 }' 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:35.378 pt2' 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.378 04:26:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.378 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.637 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.637 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.637 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.637 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:09:35.637 04:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.637 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.637 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.637 [2024-11-27 04:26:31.981278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.637 04:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5150c2cf-7e80-4497-8768-395cb0fd9572 '!=' 5150c2cf-7e80-4497-8768-395cb0fd9572 ']' 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62367 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62367 ']' 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62367 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62367 00:09:35.637 killing process with pid 62367 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62367' 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62367 00:09:35.637 [2024-11-27 04:26:32.068299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.637 [2024-11-27 04:26:32.068409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.637 [2024-11-27 04:26:32.068470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.637 04:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62367 00:09:35.637 [2024-11-27 04:26:32.068484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:35.897 [2024-11-27 04:26:32.318051] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.281 ************************************ 00:09:37.281 END TEST raid_superblock_test 00:09:37.281 ************************************ 00:09:37.281 04:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:37.281 00:09:37.281 real 0m4.923s 00:09:37.281 user 0m6.861s 00:09:37.281 sys 0m0.731s 00:09:37.281 04:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.281 04:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 04:26:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:37.281 04:26:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:37.281 04:26:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.281 04:26:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 ************************************ 00:09:37.281 START TEST raid_read_error_test 00:09:37.281 ************************************ 00:09:37.281 04:26:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:37.281 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:37.282 04:26:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9RPf1KNc7n 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62579 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62579 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62579 ']' 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.282 04:26:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.282 [2024-11-27 04:26:33.845661] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:37.282 [2024-11-27 04:26:33.845873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62579 ] 00:09:37.540 [2024-11-27 04:26:34.026848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.799 [2024-11-27 04:26:34.150859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.799 [2024-11-27 04:26:34.362344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.799 [2024-11-27 04:26:34.362492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.367 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.367 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:38.367 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.367 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.367 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.367 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.367 BaseBdev1_malloc 00:09:38.367 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.367 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:38.367 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.368 true 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.368 [2024-11-27 04:26:34.801010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:38.368 [2024-11-27 04:26:34.801081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.368 [2024-11-27 04:26:34.801137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:38.368 [2024-11-27 04:26:34.801150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.368 [2024-11-27 04:26:34.803382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.368 [2024-11-27 04:26:34.803433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:38.368 BaseBdev1 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.368 BaseBdev2_malloc 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.368 true 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.368 [2024-11-27 04:26:34.869736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:38.368 [2024-11-27 04:26:34.869814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.368 [2024-11-27 04:26:34.869837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:38.368 [2024-11-27 04:26:34.869847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.368 [2024-11-27 04:26:34.872076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.368 [2024-11-27 04:26:34.872126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:38.368 BaseBdev2 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.368 [2024-11-27 04:26:34.881801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:09:38.368 [2024-11-27 04:26:34.883739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.368 [2024-11-27 04:26:34.884076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.368 [2024-11-27 04:26:34.884111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:38.368 [2024-11-27 04:26:34.884428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:38.368 [2024-11-27 04:26:34.884637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.368 [2024-11-27 04:26:34.884649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:38.368 [2024-11-27 04:26:34.884844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.368 "name": "raid_bdev1", 00:09:38.368 "uuid": "ad50e246-1050-42fd-bad3-2c8972f9bba1", 00:09:38.368 "strip_size_kb": 64, 00:09:38.368 "state": "online", 00:09:38.368 "raid_level": "concat", 00:09:38.368 "superblock": true, 00:09:38.368 "num_base_bdevs": 2, 00:09:38.368 "num_base_bdevs_discovered": 2, 00:09:38.368 "num_base_bdevs_operational": 2, 00:09:38.368 "base_bdevs_list": [ 00:09:38.368 { 00:09:38.368 "name": "BaseBdev1", 00:09:38.368 "uuid": "f9fb8d36-588d-5206-8853-fc2b3e464e9d", 00:09:38.368 "is_configured": true, 00:09:38.368 "data_offset": 2048, 00:09:38.368 "data_size": 63488 00:09:38.368 }, 00:09:38.368 { 00:09:38.368 "name": "BaseBdev2", 00:09:38.368 "uuid": "eed73700-8c9b-5e0c-8769-9d592dd65464", 00:09:38.368 "is_configured": true, 00:09:38.368 "data_offset": 2048, 00:09:38.368 "data_size": 63488 00:09:38.368 } 00:09:38.368 ] 00:09:38.368 }' 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.368 04:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.938 04:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:38.938 04:26:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:38.938 [2024-11-27 04:26:35.451117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.907 "name": "raid_bdev1", 00:09:39.907 "uuid": "ad50e246-1050-42fd-bad3-2c8972f9bba1", 00:09:39.907 "strip_size_kb": 64, 00:09:39.907 "state": "online", 00:09:39.907 "raid_level": "concat", 00:09:39.907 "superblock": true, 00:09:39.907 "num_base_bdevs": 2, 00:09:39.907 "num_base_bdevs_discovered": 2, 00:09:39.907 "num_base_bdevs_operational": 2, 00:09:39.907 "base_bdevs_list": [ 00:09:39.907 { 00:09:39.907 "name": "BaseBdev1", 00:09:39.907 "uuid": "f9fb8d36-588d-5206-8853-fc2b3e464e9d", 00:09:39.907 "is_configured": true, 00:09:39.907 "data_offset": 2048, 00:09:39.907 "data_size": 63488 00:09:39.907 }, 00:09:39.907 { 00:09:39.907 "name": "BaseBdev2", 00:09:39.907 "uuid": "eed73700-8c9b-5e0c-8769-9d592dd65464", 00:09:39.907 "is_configured": true, 00:09:39.907 "data_offset": 2048, 00:09:39.907 "data_size": 63488 00:09:39.907 } 00:09:39.907 ] 00:09:39.907 }' 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.907 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.475 04:26:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.475 [2024-11-27 04:26:36.808925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.475 [2024-11-27 04:26:36.809007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.475 [2024-11-27 04:26:36.816001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.475 { 00:09:40.475 "results": [ 00:09:40.475 { 00:09:40.475 "job": "raid_bdev1", 00:09:40.475 "core_mask": "0x1", 00:09:40.475 "workload": "randrw", 00:09:40.475 "percentage": 50, 00:09:40.475 "status": "finished", 00:09:40.475 "queue_depth": 1, 00:09:40.475 "io_size": 131072, 00:09:40.475 "runtime": 1.358277, 00:09:40.475 "iops": 12557.821416397392, 00:09:40.475 "mibps": 1569.727677049674, 00:09:40.475 "io_failed": 1, 00:09:40.475 "io_timeout": 0, 00:09:40.475 "avg_latency_us": 111.48102958260566, 00:09:40.475 "min_latency_us": 27.053275109170304, 00:09:40.475 "max_latency_us": 1667.0183406113538 00:09:40.475 } 00:09:40.475 ], 00:09:40.475 "core_count": 1 00:09:40.475 } 00:09:40.475 [2024-11-27 04:26:36.816329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.475 [2024-11-27 04:26:36.816424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.475 [2024-11-27 04:26:36.816455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62579 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62579 ']' 00:09:40.475 04:26:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62579 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62579 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62579' 00:09:40.475 killing process with pid 62579 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62579 00:09:40.475 [2024-11-27 04:26:36.868511] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.475 04:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62579 00:09:40.475 [2024-11-27 04:26:37.031399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9RPf1KNc7n 00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:41.854 ************************************ 00:09:41.854 END TEST raid_read_error_test 00:09:41.854 ************************************ 00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:41.854 00:09:41.854 real 0m4.681s 00:09:41.854 user 0m5.578s 00:09:41.854 sys 0m0.601s 00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.854 04:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.113 04:26:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:42.114 04:26:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:42.114 04:26:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.114 04:26:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.114 ************************************ 00:09:42.114 START TEST raid_write_error_test 00:09:42.114 ************************************ 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.114 04:26:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2GpIi0sK32 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62730 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62730 00:09:42.114 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62730 ']' 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.114 04:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.114 [2024-11-27 04:26:38.599459] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:42.114 [2024-11-27 04:26:38.599580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62730 ] 00:09:42.374 [2024-11-27 04:26:38.757197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.374 [2024-11-27 04:26:38.903514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.634 [2024-11-27 04:26:39.155011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.634 [2024-11-27 04:26:39.155070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.203 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.204 BaseBdev1_malloc 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.204 true 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.204 [2024-11-27 04:26:39.555731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.204 [2024-11-27 04:26:39.555807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.204 [2024-11-27 04:26:39.555833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.204 [2024-11-27 04:26:39.555847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.204 [2024-11-27 04:26:39.558599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.204 [2024-11-27 04:26:39.558642] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.204 BaseBdev1 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.204 BaseBdev2_malloc 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.204 true 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.204 [2024-11-27 04:26:39.633376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:43.204 [2024-11-27 04:26:39.633465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.204 [2024-11-27 04:26:39.633490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:43.204 
[2024-11-27 04:26:39.633504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.204 [2024-11-27 04:26:39.636331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.204 [2024-11-27 04:26:39.636377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:43.204 BaseBdev2 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.204 [2024-11-27 04:26:39.645505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.204 [2024-11-27 04:26:39.648135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.204 [2024-11-27 04:26:39.648404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:43.204 [2024-11-27 04:26:39.648428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:43.204 [2024-11-27 04:26:39.648765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:43.204 [2024-11-27 04:26:39.649006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.204 [2024-11-27 04:26:39.649027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:43.204 [2024-11-27 04:26:39.649348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.204 
04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.204 "name": "raid_bdev1", 00:09:43.204 "uuid": "d7bba28a-fba3-4e03-8450-35e01b672e5d", 00:09:43.204 "strip_size_kb": 64, 00:09:43.204 "state": "online", 00:09:43.204 "raid_level": "concat", 00:09:43.204 "superblock": true, 
00:09:43.204 "num_base_bdevs": 2, 00:09:43.204 "num_base_bdevs_discovered": 2, 00:09:43.204 "num_base_bdevs_operational": 2, 00:09:43.204 "base_bdevs_list": [ 00:09:43.204 { 00:09:43.204 "name": "BaseBdev1", 00:09:43.204 "uuid": "5e8b4420-35fe-5d3d-80a8-ec73d1f53a1c", 00:09:43.204 "is_configured": true, 00:09:43.204 "data_offset": 2048, 00:09:43.204 "data_size": 63488 00:09:43.204 }, 00:09:43.204 { 00:09:43.204 "name": "BaseBdev2", 00:09:43.204 "uuid": "2e602356-9161-5de0-854b-7984afc35d95", 00:09:43.204 "is_configured": true, 00:09:43.204 "data_offset": 2048, 00:09:43.204 "data_size": 63488 00:09:43.204 } 00:09:43.204 ] 00:09:43.204 }' 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.204 04:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.773 04:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:43.773 04:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:43.773 [2024-11-27 04:26:40.233817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.711 "name": "raid_bdev1", 00:09:44.711 "uuid": "d7bba28a-fba3-4e03-8450-35e01b672e5d", 00:09:44.711 "strip_size_kb": 64, 00:09:44.711 "state": "online", 00:09:44.711 "raid_level": "concat", 
00:09:44.711 "superblock": true, 00:09:44.711 "num_base_bdevs": 2, 00:09:44.711 "num_base_bdevs_discovered": 2, 00:09:44.711 "num_base_bdevs_operational": 2, 00:09:44.711 "base_bdevs_list": [ 00:09:44.711 { 00:09:44.711 "name": "BaseBdev1", 00:09:44.711 "uuid": "5e8b4420-35fe-5d3d-80a8-ec73d1f53a1c", 00:09:44.711 "is_configured": true, 00:09:44.711 "data_offset": 2048, 00:09:44.711 "data_size": 63488 00:09:44.711 }, 00:09:44.711 { 00:09:44.711 "name": "BaseBdev2", 00:09:44.711 "uuid": "2e602356-9161-5de0-854b-7984afc35d95", 00:09:44.711 "is_configured": true, 00:09:44.711 "data_offset": 2048, 00:09:44.711 "data_size": 63488 00:09:44.711 } 00:09:44.711 ] 00:09:44.711 }' 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.711 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.280 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.280 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.280 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.280 [2024-11-27 04:26:41.614993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.280 [2024-11-27 04:26:41.615052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.280 [2024-11-27 04:26:41.618065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.280 [2024-11-27 04:26:41.618146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.280 [2024-11-27 04:26:41.618184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.280 [2024-11-27 04:26:41.618203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:45.280 { 
00:09:45.280 "results": [ 00:09:45.280 { 00:09:45.280 "job": "raid_bdev1", 00:09:45.280 "core_mask": "0x1", 00:09:45.280 "workload": "randrw", 00:09:45.280 "percentage": 50, 00:09:45.280 "status": "finished", 00:09:45.280 "queue_depth": 1, 00:09:45.280 "io_size": 131072, 00:09:45.280 "runtime": 1.381953, 00:09:45.280 "iops": 13009.12549124319, 00:09:45.281 "mibps": 1626.1406864053988, 00:09:45.281 "io_failed": 1, 00:09:45.281 "io_timeout": 0, 00:09:45.281 "avg_latency_us": 107.70928820159182, 00:09:45.281 "min_latency_us": 24.482096069868994, 00:09:45.281 "max_latency_us": 1566.8541484716156 00:09:45.281 } 00:09:45.281 ], 00:09:45.281 "core_count": 1 00:09:45.281 } 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62730 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62730 ']' 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62730 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62730 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62730' 00:09:45.281 killing process with pid 62730 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62730 00:09:45.281 [2024-11-27 04:26:41.670461] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.281 04:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62730 00:09:45.281 [2024-11-27 04:26:41.843081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.658 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:46.659 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2GpIi0sK32 00:09:46.659 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:46.659 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:46.659 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:46.659 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.659 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.659 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:46.659 00:09:46.659 real 0m4.752s 00:09:46.659 user 0m5.635s 00:09:46.659 sys 0m0.654s 00:09:46.659 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.659 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.659 ************************************ 00:09:46.659 END TEST raid_write_error_test 00:09:46.659 ************************************ 00:09:46.919 04:26:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:46.919 04:26:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:46.919 04:26:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:46.919 04:26:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.919 04:26:43 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.919 ************************************ 00:09:46.919 START TEST raid_state_function_test 00:09:46.919 ************************************ 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62874 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62874' 00:09:46.919 Process raid pid: 62874 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62874 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62874 ']' 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.919 04:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.919 [2024-11-27 04:26:43.418686] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:46.919 [2024-11-27 04:26:43.419474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.179 [2024-11-27 04:26:43.604739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.179 [2024-11-27 04:26:43.758572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.748 [2024-11-27 04:26:44.043121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.748 [2024-11-27 04:26:44.043182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.748 [2024-11-27 04:26:44.265716] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.748 [2024-11-27 04:26:44.265795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.748 [2024-11-27 04:26:44.265807] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:09:47.748 [2024-11-27 04:26:44.265818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:47.748 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.748 "name": "Existed_Raid", 00:09:47.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.748 "strip_size_kb": 0, 00:09:47.748 "state": "configuring", 00:09:47.748 "raid_level": "raid1", 00:09:47.748 "superblock": false, 00:09:47.748 "num_base_bdevs": 2, 00:09:47.748 "num_base_bdevs_discovered": 0, 00:09:47.748 "num_base_bdevs_operational": 2, 00:09:47.748 "base_bdevs_list": [ 00:09:47.748 { 00:09:47.748 "name": "BaseBdev1", 00:09:47.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.748 "is_configured": false, 00:09:47.748 "data_offset": 0, 00:09:47.748 "data_size": 0 00:09:47.748 }, 00:09:47.748 { 00:09:47.748 "name": "BaseBdev2", 00:09:47.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.748 "is_configured": false, 00:09:47.748 "data_offset": 0, 00:09:47.748 "data_size": 0 00:09:47.748 } 00:09:47.748 ] 00:09:47.748 }' 00:09:47.749 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.749 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.359 [2024-11-27 04:26:44.689020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.359 [2024-11-27 04:26:44.689080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.359 [2024-11-27 04:26:44.696967] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.359 [2024-11-27 04:26:44.697040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.359 [2024-11-27 04:26:44.697052] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.359 [2024-11-27 04:26:44.697066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.359 [2024-11-27 04:26:44.754824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.359 BaseBdev1 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:48.359 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.360 
04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.360 [ 00:09:48.360 { 00:09:48.360 "name": "BaseBdev1", 00:09:48.360 "aliases": [ 00:09:48.360 "dda5a5fb-00cb-4532-ade4-e9374f2c6b6d" 00:09:48.360 ], 00:09:48.360 "product_name": "Malloc disk", 00:09:48.360 "block_size": 512, 00:09:48.360 "num_blocks": 65536, 00:09:48.360 "uuid": "dda5a5fb-00cb-4532-ade4-e9374f2c6b6d", 00:09:48.360 "assigned_rate_limits": { 00:09:48.360 "rw_ios_per_sec": 0, 00:09:48.360 "rw_mbytes_per_sec": 0, 00:09:48.360 "r_mbytes_per_sec": 0, 00:09:48.360 "w_mbytes_per_sec": 0 00:09:48.360 }, 00:09:48.360 "claimed": true, 00:09:48.360 "claim_type": "exclusive_write", 00:09:48.360 "zoned": false, 00:09:48.360 "supported_io_types": { 00:09:48.360 "read": true, 00:09:48.360 "write": true, 00:09:48.360 "unmap": true, 00:09:48.360 "flush": true, 00:09:48.360 "reset": true, 00:09:48.360 "nvme_admin": false, 00:09:48.360 "nvme_io": false, 00:09:48.360 "nvme_io_md": false, 00:09:48.360 "write_zeroes": true, 00:09:48.360 "zcopy": true, 00:09:48.360 "get_zone_info": 
false, 00:09:48.360 "zone_management": false, 00:09:48.360 "zone_append": false, 00:09:48.360 "compare": false, 00:09:48.360 "compare_and_write": false, 00:09:48.360 "abort": true, 00:09:48.360 "seek_hole": false, 00:09:48.360 "seek_data": false, 00:09:48.360 "copy": true, 00:09:48.360 "nvme_iov_md": false 00:09:48.360 }, 00:09:48.360 "memory_domains": [ 00:09:48.360 { 00:09:48.360 "dma_device_id": "system", 00:09:48.360 "dma_device_type": 1 00:09:48.360 }, 00:09:48.360 { 00:09:48.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.360 "dma_device_type": 2 00:09:48.360 } 00:09:48.360 ], 00:09:48.360 "driver_specific": {} 00:09:48.360 } 00:09:48.360 ] 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.360 "name": "Existed_Raid", 00:09:48.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.360 "strip_size_kb": 0, 00:09:48.360 "state": "configuring", 00:09:48.360 "raid_level": "raid1", 00:09:48.360 "superblock": false, 00:09:48.360 "num_base_bdevs": 2, 00:09:48.360 "num_base_bdevs_discovered": 1, 00:09:48.360 "num_base_bdevs_operational": 2, 00:09:48.360 "base_bdevs_list": [ 00:09:48.360 { 00:09:48.360 "name": "BaseBdev1", 00:09:48.360 "uuid": "dda5a5fb-00cb-4532-ade4-e9374f2c6b6d", 00:09:48.360 "is_configured": true, 00:09:48.360 "data_offset": 0, 00:09:48.360 "data_size": 65536 00:09:48.360 }, 00:09:48.360 { 00:09:48.360 "name": "BaseBdev2", 00:09:48.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.360 "is_configured": false, 00:09:48.360 "data_offset": 0, 00:09:48.360 "data_size": 0 00:09:48.360 } 00:09:48.360 ] 00:09:48.360 }' 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.360 04:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.928 [2024-11-27 04:26:45.246202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.928 [2024-11-27 04:26:45.246293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.928 [2024-11-27 04:26:45.258256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.928 [2024-11-27 04:26:45.260789] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.928 [2024-11-27 04:26:45.260852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:48.928 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.929 "name": "Existed_Raid", 00:09:48.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.929 "strip_size_kb": 0, 00:09:48.929 "state": "configuring", 00:09:48.929 "raid_level": "raid1", 00:09:48.929 "superblock": false, 00:09:48.929 "num_base_bdevs": 2, 00:09:48.929 "num_base_bdevs_discovered": 1, 00:09:48.929 "num_base_bdevs_operational": 2, 00:09:48.929 "base_bdevs_list": [ 00:09:48.929 { 00:09:48.929 "name": "BaseBdev1", 00:09:48.929 "uuid": "dda5a5fb-00cb-4532-ade4-e9374f2c6b6d", 00:09:48.929 
"is_configured": true, 00:09:48.929 "data_offset": 0, 00:09:48.929 "data_size": 65536 00:09:48.929 }, 00:09:48.929 { 00:09:48.929 "name": "BaseBdev2", 00:09:48.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.929 "is_configured": false, 00:09:48.929 "data_offset": 0, 00:09:48.929 "data_size": 0 00:09:48.929 } 00:09:48.929 ] 00:09:48.929 }' 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.929 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.187 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.187 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.187 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.447 [2024-11-27 04:26:45.783263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.447 [2024-11-27 04:26:45.783338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.447 [2024-11-27 04:26:45.783348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:49.447 BaseBdev2 00:09:49.447 [2024-11-27 04:26:45.783676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:49.447 [2024-11-27 04:26:45.783899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.447 [2024-11-27 04:26:45.783915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:49.447 [2024-11-27 04:26:45.784260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.447 [ 00:09:49.447 { 00:09:49.447 "name": "BaseBdev2", 00:09:49.447 "aliases": [ 00:09:49.447 "3964f6f1-976e-48c8-933e-d6d49867cabf" 00:09:49.447 ], 00:09:49.447 "product_name": "Malloc disk", 00:09:49.447 "block_size": 512, 00:09:49.447 "num_blocks": 65536, 00:09:49.447 "uuid": "3964f6f1-976e-48c8-933e-d6d49867cabf", 00:09:49.447 "assigned_rate_limits": { 00:09:49.447 "rw_ios_per_sec": 0, 00:09:49.447 "rw_mbytes_per_sec": 0, 00:09:49.447 "r_mbytes_per_sec": 0, 00:09:49.447 "w_mbytes_per_sec": 0 00:09:49.447 }, 00:09:49.447 "claimed": true, 00:09:49.447 "claim_type": 
"exclusive_write", 00:09:49.447 "zoned": false, 00:09:49.447 "supported_io_types": { 00:09:49.447 "read": true, 00:09:49.447 "write": true, 00:09:49.447 "unmap": true, 00:09:49.447 "flush": true, 00:09:49.447 "reset": true, 00:09:49.447 "nvme_admin": false, 00:09:49.447 "nvme_io": false, 00:09:49.447 "nvme_io_md": false, 00:09:49.447 "write_zeroes": true, 00:09:49.447 "zcopy": true, 00:09:49.447 "get_zone_info": false, 00:09:49.447 "zone_management": false, 00:09:49.447 "zone_append": false, 00:09:49.447 "compare": false, 00:09:49.447 "compare_and_write": false, 00:09:49.447 "abort": true, 00:09:49.447 "seek_hole": false, 00:09:49.447 "seek_data": false, 00:09:49.447 "copy": true, 00:09:49.447 "nvme_iov_md": false 00:09:49.447 }, 00:09:49.447 "memory_domains": [ 00:09:49.447 { 00:09:49.447 "dma_device_id": "system", 00:09:49.447 "dma_device_type": 1 00:09:49.447 }, 00:09:49.447 { 00:09:49.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.447 "dma_device_type": 2 00:09:49.447 } 00:09:49.447 ], 00:09:49.447 "driver_specific": {} 00:09:49.447 } 00:09:49.447 ] 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.447 
04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.447 "name": "Existed_Raid", 00:09:49.447 "uuid": "0d57e316-2a2e-41f1-a711-305033f763df", 00:09:49.447 "strip_size_kb": 0, 00:09:49.447 "state": "online", 00:09:49.447 "raid_level": "raid1", 00:09:49.447 "superblock": false, 00:09:49.447 "num_base_bdevs": 2, 00:09:49.447 "num_base_bdevs_discovered": 2, 00:09:49.447 "num_base_bdevs_operational": 2, 00:09:49.447 "base_bdevs_list": [ 00:09:49.447 { 00:09:49.447 "name": "BaseBdev1", 00:09:49.447 "uuid": "dda5a5fb-00cb-4532-ade4-e9374f2c6b6d", 00:09:49.447 "is_configured": true, 00:09:49.447 "data_offset": 0, 00:09:49.447 "data_size": 65536 00:09:49.447 }, 00:09:49.447 { 00:09:49.447 "name": "BaseBdev2", 
00:09:49.447 "uuid": "3964f6f1-976e-48c8-933e-d6d49867cabf", 00:09:49.447 "is_configured": true, 00:09:49.447 "data_offset": 0, 00:09:49.447 "data_size": 65536 00:09:49.447 } 00:09:49.447 ] 00:09:49.447 }' 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.447 04:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.707 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.707 [2024-11-27 04:26:46.278896] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.966 "name": "Existed_Raid", 00:09:49.966 "aliases": [ 00:09:49.966 "0d57e316-2a2e-41f1-a711-305033f763df" 00:09:49.966 ], 
00:09:49.966 "product_name": "Raid Volume", 00:09:49.966 "block_size": 512, 00:09:49.966 "num_blocks": 65536, 00:09:49.966 "uuid": "0d57e316-2a2e-41f1-a711-305033f763df", 00:09:49.966 "assigned_rate_limits": { 00:09:49.966 "rw_ios_per_sec": 0, 00:09:49.966 "rw_mbytes_per_sec": 0, 00:09:49.966 "r_mbytes_per_sec": 0, 00:09:49.966 "w_mbytes_per_sec": 0 00:09:49.966 }, 00:09:49.966 "claimed": false, 00:09:49.966 "zoned": false, 00:09:49.966 "supported_io_types": { 00:09:49.966 "read": true, 00:09:49.966 "write": true, 00:09:49.966 "unmap": false, 00:09:49.966 "flush": false, 00:09:49.966 "reset": true, 00:09:49.966 "nvme_admin": false, 00:09:49.966 "nvme_io": false, 00:09:49.966 "nvme_io_md": false, 00:09:49.966 "write_zeroes": true, 00:09:49.966 "zcopy": false, 00:09:49.966 "get_zone_info": false, 00:09:49.966 "zone_management": false, 00:09:49.966 "zone_append": false, 00:09:49.966 "compare": false, 00:09:49.966 "compare_and_write": false, 00:09:49.966 "abort": false, 00:09:49.966 "seek_hole": false, 00:09:49.966 "seek_data": false, 00:09:49.966 "copy": false, 00:09:49.966 "nvme_iov_md": false 00:09:49.966 }, 00:09:49.966 "memory_domains": [ 00:09:49.966 { 00:09:49.966 "dma_device_id": "system", 00:09:49.966 "dma_device_type": 1 00:09:49.966 }, 00:09:49.966 { 00:09:49.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.966 "dma_device_type": 2 00:09:49.966 }, 00:09:49.966 { 00:09:49.966 "dma_device_id": "system", 00:09:49.966 "dma_device_type": 1 00:09:49.966 }, 00:09:49.966 { 00:09:49.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.966 "dma_device_type": 2 00:09:49.966 } 00:09:49.966 ], 00:09:49.966 "driver_specific": { 00:09:49.966 "raid": { 00:09:49.966 "uuid": "0d57e316-2a2e-41f1-a711-305033f763df", 00:09:49.966 "strip_size_kb": 0, 00:09:49.966 "state": "online", 00:09:49.966 "raid_level": "raid1", 00:09:49.966 "superblock": false, 00:09:49.966 "num_base_bdevs": 2, 00:09:49.966 "num_base_bdevs_discovered": 2, 00:09:49.966 "num_base_bdevs_operational": 
2, 00:09:49.966 "base_bdevs_list": [ 00:09:49.966 { 00:09:49.966 "name": "BaseBdev1", 00:09:49.966 "uuid": "dda5a5fb-00cb-4532-ade4-e9374f2c6b6d", 00:09:49.966 "is_configured": true, 00:09:49.966 "data_offset": 0, 00:09:49.966 "data_size": 65536 00:09:49.966 }, 00:09:49.966 { 00:09:49.966 "name": "BaseBdev2", 00:09:49.966 "uuid": "3964f6f1-976e-48c8-933e-d6d49867cabf", 00:09:49.966 "is_configured": true, 00:09:49.966 "data_offset": 0, 00:09:49.966 "data_size": 65536 00:09:49.966 } 00:09:49.966 ] 00:09:49.966 } 00:09:49.966 } 00:09:49.966 }' 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:49.966 BaseBdev2' 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.966 04:26:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.966 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.967 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.967 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.967 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.967 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.967 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.967 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.967 [2024-11-27 04:26:46.510266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.226 "name": "Existed_Raid", 00:09:50.226 "uuid": 
"0d57e316-2a2e-41f1-a711-305033f763df", 00:09:50.226 "strip_size_kb": 0, 00:09:50.226 "state": "online", 00:09:50.226 "raid_level": "raid1", 00:09:50.226 "superblock": false, 00:09:50.226 "num_base_bdevs": 2, 00:09:50.226 "num_base_bdevs_discovered": 1, 00:09:50.226 "num_base_bdevs_operational": 1, 00:09:50.226 "base_bdevs_list": [ 00:09:50.226 { 00:09:50.226 "name": null, 00:09:50.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.226 "is_configured": false, 00:09:50.226 "data_offset": 0, 00:09:50.226 "data_size": 65536 00:09:50.226 }, 00:09:50.226 { 00:09:50.226 "name": "BaseBdev2", 00:09:50.226 "uuid": "3964f6f1-976e-48c8-933e-d6d49867cabf", 00:09:50.226 "is_configured": true, 00:09:50.226 "data_offset": 0, 00:09:50.226 "data_size": 65536 00:09:50.226 } 00:09:50.226 ] 00:09:50.226 }' 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.226 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.795 [2024-11-27 04:26:47.174044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.795 [2024-11-27 04:26:47.174211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.795 [2024-11-27 04:26:47.297934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.795 [2024-11-27 04:26:47.298016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.795 [2024-11-27 04:26:47.298033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.795 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:50.796 
04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62874 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62874 ']' 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62874 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.796 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62874 00:09:51.058 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.058 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.058 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62874' 00:09:51.058 killing process with pid 62874 00:09:51.058 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62874 00:09:51.058 [2024-11-27 04:26:47.390546] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.058 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62874 00:09:51.058 [2024-11-27 04:26:47.412152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:52.436 ************************************ 00:09:52.436 END TEST raid_state_function_test 00:09:52.436 ************************************ 00:09:52.436 00:09:52.436 real 0m5.521s 00:09:52.436 user 
0m7.733s 00:09:52.436 sys 0m0.958s 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.436 04:26:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:52.436 04:26:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:52.436 04:26:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.436 04:26:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.436 ************************************ 00:09:52.436 START TEST raid_state_function_test_sb 00:09:52.436 ************************************ 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63127 00:09:52.436 Process raid pid: 63127 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63127' 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63127 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 
-- # '[' -z 63127 ']' 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.436 04:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.436 [2024-11-27 04:26:48.999673] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:52.436 [2024-11-27 04:26:48.999786] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.698 [2024-11-27 04:26:49.164449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.956 [2024-11-27 04:26:49.318697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.214 [2024-11-27 04:26:49.585218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.214 [2024-11-27 04:26:49.585281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.473 [2024-11-27 04:26:49.937335] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.473 [2024-11-27 04:26:49.937526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.473 [2024-11-27 04:26:49.937549] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.473 [2024-11-27 04:26:49.937562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.473 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.474 04:26:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.474 "name": "Existed_Raid", 00:09:53.474 "uuid": "4164ba32-45de-425e-9d83-cb734799fdd6", 00:09:53.474 "strip_size_kb": 0, 00:09:53.474 "state": "configuring", 00:09:53.474 "raid_level": "raid1", 00:09:53.474 "superblock": true, 00:09:53.474 "num_base_bdevs": 2, 00:09:53.474 "num_base_bdevs_discovered": 0, 00:09:53.474 "num_base_bdevs_operational": 2, 00:09:53.474 "base_bdevs_list": [ 00:09:53.474 { 00:09:53.474 "name": "BaseBdev1", 00:09:53.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.474 "is_configured": false, 00:09:53.474 "data_offset": 0, 00:09:53.474 "data_size": 0 00:09:53.474 }, 00:09:53.474 { 00:09:53.474 "name": "BaseBdev2", 00:09:53.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.474 "is_configured": false, 00:09:53.474 "data_offset": 0, 00:09:53.474 "data_size": 0 00:09:53.474 } 00:09:53.474 ] 00:09:53.474 }' 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.474 04:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.042 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.043 
04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 [2024-11-27 04:26:50.352543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.043 [2024-11-27 04:26:50.352707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 [2024-11-27 04:26:50.364537] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.043 [2024-11-27 04:26:50.364711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.043 [2024-11-27 04:26:50.364747] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.043 [2024-11-27 04:26:50.364781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 [2024-11-27 
04:26:50.424562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.043 BaseBdev1 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 [ 00:09:54.043 { 00:09:54.043 "name": "BaseBdev1", 00:09:54.043 "aliases": [ 00:09:54.043 "dc562bbc-8464-457d-8710-f53d3ec71a12" 00:09:54.043 ], 00:09:54.043 "product_name": "Malloc disk", 00:09:54.043 "block_size": 512, 00:09:54.043 "num_blocks": 
65536, 00:09:54.043 "uuid": "dc562bbc-8464-457d-8710-f53d3ec71a12", 00:09:54.043 "assigned_rate_limits": { 00:09:54.043 "rw_ios_per_sec": 0, 00:09:54.043 "rw_mbytes_per_sec": 0, 00:09:54.043 "r_mbytes_per_sec": 0, 00:09:54.043 "w_mbytes_per_sec": 0 00:09:54.043 }, 00:09:54.043 "claimed": true, 00:09:54.043 "claim_type": "exclusive_write", 00:09:54.043 "zoned": false, 00:09:54.043 "supported_io_types": { 00:09:54.043 "read": true, 00:09:54.043 "write": true, 00:09:54.043 "unmap": true, 00:09:54.043 "flush": true, 00:09:54.043 "reset": true, 00:09:54.043 "nvme_admin": false, 00:09:54.043 "nvme_io": false, 00:09:54.043 "nvme_io_md": false, 00:09:54.043 "write_zeroes": true, 00:09:54.043 "zcopy": true, 00:09:54.043 "get_zone_info": false, 00:09:54.043 "zone_management": false, 00:09:54.043 "zone_append": false, 00:09:54.043 "compare": false, 00:09:54.043 "compare_and_write": false, 00:09:54.043 "abort": true, 00:09:54.043 "seek_hole": false, 00:09:54.043 "seek_data": false, 00:09:54.043 "copy": true, 00:09:54.043 "nvme_iov_md": false 00:09:54.043 }, 00:09:54.043 "memory_domains": [ 00:09:54.043 { 00:09:54.043 "dma_device_id": "system", 00:09:54.043 "dma_device_type": 1 00:09:54.043 }, 00:09:54.043 { 00:09:54.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.043 "dma_device_type": 2 00:09:54.043 } 00:09:54.043 ], 00:09:54.043 "driver_specific": {} 00:09:54.043 } 00:09:54.043 ] 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.043 "name": "Existed_Raid", 00:09:54.043 "uuid": "fd3d67ea-e879-436f-b4ce-236b51ea9db6", 00:09:54.043 "strip_size_kb": 0, 00:09:54.043 "state": "configuring", 00:09:54.043 "raid_level": "raid1", 00:09:54.043 "superblock": true, 00:09:54.043 "num_base_bdevs": 2, 00:09:54.043 "num_base_bdevs_discovered": 1, 00:09:54.043 "num_base_bdevs_operational": 2, 00:09:54.043 "base_bdevs_list": [ 00:09:54.043 { 00:09:54.043 "name": "BaseBdev1", 00:09:54.043 "uuid": 
"dc562bbc-8464-457d-8710-f53d3ec71a12", 00:09:54.043 "is_configured": true, 00:09:54.043 "data_offset": 2048, 00:09:54.043 "data_size": 63488 00:09:54.043 }, 00:09:54.043 { 00:09:54.043 "name": "BaseBdev2", 00:09:54.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.043 "is_configured": false, 00:09:54.043 "data_offset": 0, 00:09:54.043 "data_size": 0 00:09:54.043 } 00:09:54.043 ] 00:09:54.043 }' 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.043 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.611 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.611 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.611 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.611 [2024-11-27 04:26:50.924143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.611 [2024-11-27 04:26:50.924330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:54.611 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.611 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:54.611 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.611 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.611 [2024-11-27 04:26:50.936186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.611 [2024-11-27 04:26:50.938762] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:09:54.612 [2024-11-27 04:26:50.938821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.612 "name": "Existed_Raid", 00:09:54.612 "uuid": "4cde45dc-30bb-4ffe-a785-9e4595508d96", 00:09:54.612 "strip_size_kb": 0, 00:09:54.612 "state": "configuring", 00:09:54.612 "raid_level": "raid1", 00:09:54.612 "superblock": true, 00:09:54.612 "num_base_bdevs": 2, 00:09:54.612 "num_base_bdevs_discovered": 1, 00:09:54.612 "num_base_bdevs_operational": 2, 00:09:54.612 "base_bdevs_list": [ 00:09:54.612 { 00:09:54.612 "name": "BaseBdev1", 00:09:54.612 "uuid": "dc562bbc-8464-457d-8710-f53d3ec71a12", 00:09:54.612 "is_configured": true, 00:09:54.612 "data_offset": 2048, 00:09:54.612 "data_size": 63488 00:09:54.612 }, 00:09:54.612 { 00:09:54.612 "name": "BaseBdev2", 00:09:54.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.612 "is_configured": false, 00:09:54.612 "data_offset": 0, 00:09:54.612 "data_size": 0 00:09:54.612 } 00:09:54.612 ] 00:09:54.612 }' 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.612 04:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.870 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.870 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.870 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.129 [2024-11-27 04:26:51.490147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.129 [2024-11-27 04:26:51.490627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:09:55.129 [2024-11-27 04:26:51.490687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:55.129 [2024-11-27 04:26:51.491163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:55.129 BaseBdev2 00:09:55.129 [2024-11-27 04:26:51.491394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:55.129 [2024-11-27 04:26:51.491412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:55.129 [2024-11-27 04:26:51.491574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.129 04:26:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.129 [ 00:09:55.129 { 00:09:55.129 "name": "BaseBdev2", 00:09:55.129 "aliases": [ 00:09:55.129 "a1c66788-5d07-4208-9d99-3155e10f7082" 00:09:55.129 ], 00:09:55.129 "product_name": "Malloc disk", 00:09:55.129 "block_size": 512, 00:09:55.129 "num_blocks": 65536, 00:09:55.129 "uuid": "a1c66788-5d07-4208-9d99-3155e10f7082", 00:09:55.129 "assigned_rate_limits": { 00:09:55.129 "rw_ios_per_sec": 0, 00:09:55.129 "rw_mbytes_per_sec": 0, 00:09:55.129 "r_mbytes_per_sec": 0, 00:09:55.129 "w_mbytes_per_sec": 0 00:09:55.129 }, 00:09:55.129 "claimed": true, 00:09:55.129 "claim_type": "exclusive_write", 00:09:55.129 "zoned": false, 00:09:55.129 "supported_io_types": { 00:09:55.129 "read": true, 00:09:55.129 "write": true, 00:09:55.129 "unmap": true, 00:09:55.129 "flush": true, 00:09:55.129 "reset": true, 00:09:55.129 "nvme_admin": false, 00:09:55.129 "nvme_io": false, 00:09:55.129 "nvme_io_md": false, 00:09:55.129 "write_zeroes": true, 00:09:55.129 "zcopy": true, 00:09:55.129 "get_zone_info": false, 00:09:55.129 "zone_management": false, 00:09:55.129 "zone_append": false, 00:09:55.129 "compare": false, 00:09:55.129 "compare_and_write": false, 00:09:55.129 "abort": true, 00:09:55.129 "seek_hole": false, 00:09:55.129 "seek_data": false, 00:09:55.129 "copy": true, 00:09:55.129 "nvme_iov_md": false 00:09:55.129 }, 00:09:55.129 "memory_domains": [ 00:09:55.129 { 00:09:55.129 "dma_device_id": "system", 00:09:55.129 "dma_device_type": 1 00:09:55.129 }, 00:09:55.129 { 00:09:55.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.129 "dma_device_type": 2 00:09:55.129 } 00:09:55.129 ], 00:09:55.129 "driver_specific": {} 00:09:55.129 } 00:09:55.129 ] 
00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:55.129 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.130 
04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.130 "name": "Existed_Raid", 00:09:55.130 "uuid": "4cde45dc-30bb-4ffe-a785-9e4595508d96", 00:09:55.130 "strip_size_kb": 0, 00:09:55.130 "state": "online", 00:09:55.130 "raid_level": "raid1", 00:09:55.130 "superblock": true, 00:09:55.130 "num_base_bdevs": 2, 00:09:55.130 "num_base_bdevs_discovered": 2, 00:09:55.130 "num_base_bdevs_operational": 2, 00:09:55.130 "base_bdevs_list": [ 00:09:55.130 { 00:09:55.130 "name": "BaseBdev1", 00:09:55.130 "uuid": "dc562bbc-8464-457d-8710-f53d3ec71a12", 00:09:55.130 "is_configured": true, 00:09:55.130 "data_offset": 2048, 00:09:55.130 "data_size": 63488 00:09:55.130 }, 00:09:55.130 { 00:09:55.130 "name": "BaseBdev2", 00:09:55.130 "uuid": "a1c66788-5d07-4208-9d99-3155e10f7082", 00:09:55.130 "is_configured": true, 00:09:55.130 "data_offset": 2048, 00:09:55.130 "data_size": 63488 00:09:55.130 } 00:09:55.130 ] 00:09:55.130 }' 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.130 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.698 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.698 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.698 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.698 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.698 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.698 04:26:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.698 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.698 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.698 04:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.698 04:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.698 [2024-11-27 04:26:51.997717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.698 "name": "Existed_Raid", 00:09:55.698 "aliases": [ 00:09:55.698 "4cde45dc-30bb-4ffe-a785-9e4595508d96" 00:09:55.698 ], 00:09:55.698 "product_name": "Raid Volume", 00:09:55.698 "block_size": 512, 00:09:55.698 "num_blocks": 63488, 00:09:55.698 "uuid": "4cde45dc-30bb-4ffe-a785-9e4595508d96", 00:09:55.698 "assigned_rate_limits": { 00:09:55.698 "rw_ios_per_sec": 0, 00:09:55.698 "rw_mbytes_per_sec": 0, 00:09:55.698 "r_mbytes_per_sec": 0, 00:09:55.698 "w_mbytes_per_sec": 0 00:09:55.698 }, 00:09:55.698 "claimed": false, 00:09:55.698 "zoned": false, 00:09:55.698 "supported_io_types": { 00:09:55.698 "read": true, 00:09:55.698 "write": true, 00:09:55.698 "unmap": false, 00:09:55.698 "flush": false, 00:09:55.698 "reset": true, 00:09:55.698 "nvme_admin": false, 00:09:55.698 "nvme_io": false, 00:09:55.698 "nvme_io_md": false, 00:09:55.698 "write_zeroes": true, 00:09:55.698 "zcopy": false, 00:09:55.698 "get_zone_info": false, 00:09:55.698 "zone_management": false, 00:09:55.698 "zone_append": false, 00:09:55.698 "compare": false, 00:09:55.698 "compare_and_write": false, 00:09:55.698 "abort": false, 
00:09:55.698 "seek_hole": false, 00:09:55.698 "seek_data": false, 00:09:55.698 "copy": false, 00:09:55.698 "nvme_iov_md": false 00:09:55.698 }, 00:09:55.698 "memory_domains": [ 00:09:55.698 { 00:09:55.698 "dma_device_id": "system", 00:09:55.698 "dma_device_type": 1 00:09:55.698 }, 00:09:55.698 { 00:09:55.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.698 "dma_device_type": 2 00:09:55.698 }, 00:09:55.698 { 00:09:55.698 "dma_device_id": "system", 00:09:55.698 "dma_device_type": 1 00:09:55.698 }, 00:09:55.698 { 00:09:55.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.698 "dma_device_type": 2 00:09:55.698 } 00:09:55.698 ], 00:09:55.698 "driver_specific": { 00:09:55.698 "raid": { 00:09:55.698 "uuid": "4cde45dc-30bb-4ffe-a785-9e4595508d96", 00:09:55.698 "strip_size_kb": 0, 00:09:55.698 "state": "online", 00:09:55.698 "raid_level": "raid1", 00:09:55.698 "superblock": true, 00:09:55.698 "num_base_bdevs": 2, 00:09:55.698 "num_base_bdevs_discovered": 2, 00:09:55.698 "num_base_bdevs_operational": 2, 00:09:55.698 "base_bdevs_list": [ 00:09:55.698 { 00:09:55.698 "name": "BaseBdev1", 00:09:55.698 "uuid": "dc562bbc-8464-457d-8710-f53d3ec71a12", 00:09:55.698 "is_configured": true, 00:09:55.698 "data_offset": 2048, 00:09:55.698 "data_size": 63488 00:09:55.698 }, 00:09:55.698 { 00:09:55.698 "name": "BaseBdev2", 00:09:55.698 "uuid": "a1c66788-5d07-4208-9d99-3155e10f7082", 00:09:55.698 "is_configured": true, 00:09:55.698 "data_offset": 2048, 00:09:55.698 "data_size": 63488 00:09:55.698 } 00:09:55.698 ] 00:09:55.698 } 00:09:55.698 } 00:09:55.698 }' 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:55.698 BaseBdev2' 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.698 04:26:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.698 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.698 [2024-11-27 04:26:52.245160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.959 "name": "Existed_Raid", 00:09:55.959 "uuid": "4cde45dc-30bb-4ffe-a785-9e4595508d96", 00:09:55.959 "strip_size_kb": 0, 00:09:55.959 "state": "online", 00:09:55.959 "raid_level": "raid1", 00:09:55.959 "superblock": true, 00:09:55.959 "num_base_bdevs": 2, 00:09:55.959 "num_base_bdevs_discovered": 1, 00:09:55.959 "num_base_bdevs_operational": 1, 00:09:55.959 "base_bdevs_list": [ 00:09:55.959 { 00:09:55.959 "name": null, 00:09:55.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.959 "is_configured": false, 00:09:55.959 "data_offset": 0, 00:09:55.959 "data_size": 63488 00:09:55.959 }, 00:09:55.959 { 00:09:55.959 "name": "BaseBdev2", 00:09:55.959 "uuid": "a1c66788-5d07-4208-9d99-3155e10f7082", 00:09:55.959 "is_configured": true, 00:09:55.959 "data_offset": 2048, 00:09:55.959 "data_size": 63488 00:09:55.959 } 00:09:55.959 ] 00:09:55.959 }' 00:09:55.959 04:26:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.959 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.546 [2024-11-27 04:26:52.871252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.546 [2024-11-27 04:26:52.871502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.546 [2024-11-27 04:26:52.994269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.546 [2024-11-27 04:26:52.994465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.546 [2024-11-27 04:26:52.994489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.546 04:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63127 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63127 ']' 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63127 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63127 00:09:56.546 killing process with pid 63127 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63127' 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63127 00:09:56.546 [2024-11-27 04:26:53.077433] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.546 04:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63127 00:09:56.546 [2024-11-27 04:26:53.099551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.937 04:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:57.937 00:09:57.937 real 0m5.555s 00:09:57.937 user 0m7.829s 00:09:57.937 sys 0m0.946s 00:09:57.937 04:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.937 ************************************ 00:09:57.937 END TEST raid_state_function_test_sb 00:09:57.937 ************************************ 00:09:57.937 04:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.937 04:26:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:57.937 04:26:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:57.937 04:26:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.937 04:26:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.196 ************************************ 00:09:58.196 START TEST 
raid_superblock_test 00:09:58.196 ************************************ 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63390 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63390 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63390 ']' 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.196 04:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.196 [2024-11-27 04:26:54.631092] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:58.196 [2024-11-27 04:26:54.631231] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63390 ] 00:09:58.454 [2024-11-27 04:26:54.815979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.454 [2024-11-27 04:26:54.961870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.713 [2024-11-27 04:26:55.213382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.713 [2024-11-27 04:26:55.213471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:58.984 
04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.984 malloc1 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.984 [2024-11-27 04:26:55.538368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:58.984 [2024-11-27 04:26:55.538462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.984 [2024-11-27 04:26:55.538500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:58.984 [2024-11-27 04:26:55.538511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.984 [2024-11-27 04:26:55.541489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.984 [2024-11-27 04:26:55.541621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:58.984 pt1 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.984 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 malloc2 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 [2024-11-27 04:26:55.608185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:59.243 [2024-11-27 04:26:55.608382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.243 [2024-11-27 04:26:55.608444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:59.243 [2024-11-27 04:26:55.608495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.243 [2024-11-27 04:26:55.611546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.243 [2024-11-27 04:26:55.611665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:59.243 
pt2 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 [2024-11-27 04:26:55.620577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:59.243 [2024-11-27 04:26:55.623247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:59.243 [2024-11-27 04:26:55.623579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:59.243 [2024-11-27 04:26:55.623642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:59.243 [2024-11-27 04:26:55.624059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:59.243 [2024-11-27 04:26:55.624361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:59.243 [2024-11-27 04:26:55.624421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:59.243 [2024-11-27 04:26:55.624768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.243 "name": "raid_bdev1", 00:09:59.243 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:09:59.243 "strip_size_kb": 0, 00:09:59.243 "state": "online", 00:09:59.243 "raid_level": "raid1", 00:09:59.243 "superblock": true, 00:09:59.243 "num_base_bdevs": 2, 00:09:59.243 "num_base_bdevs_discovered": 2, 00:09:59.243 "num_base_bdevs_operational": 2, 00:09:59.243 "base_bdevs_list": [ 00:09:59.243 { 00:09:59.243 "name": "pt1", 00:09:59.243 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:59.243 "is_configured": true, 00:09:59.243 "data_offset": 2048, 00:09:59.243 "data_size": 63488 00:09:59.243 }, 00:09:59.243 { 00:09:59.243 "name": "pt2", 00:09:59.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.243 "is_configured": true, 00:09:59.243 "data_offset": 2048, 00:09:59.243 "data_size": 63488 00:09:59.243 } 00:09:59.243 ] 00:09:59.243 }' 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.243 04:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.501 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.501 [2024-11-27 04:26:56.076602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:09:59.759 "name": "raid_bdev1", 00:09:59.759 "aliases": [ 00:09:59.759 "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e" 00:09:59.759 ], 00:09:59.759 "product_name": "Raid Volume", 00:09:59.759 "block_size": 512, 00:09:59.759 "num_blocks": 63488, 00:09:59.759 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:09:59.759 "assigned_rate_limits": { 00:09:59.759 "rw_ios_per_sec": 0, 00:09:59.759 "rw_mbytes_per_sec": 0, 00:09:59.759 "r_mbytes_per_sec": 0, 00:09:59.759 "w_mbytes_per_sec": 0 00:09:59.759 }, 00:09:59.759 "claimed": false, 00:09:59.759 "zoned": false, 00:09:59.759 "supported_io_types": { 00:09:59.759 "read": true, 00:09:59.759 "write": true, 00:09:59.759 "unmap": false, 00:09:59.759 "flush": false, 00:09:59.759 "reset": true, 00:09:59.759 "nvme_admin": false, 00:09:59.759 "nvme_io": false, 00:09:59.759 "nvme_io_md": false, 00:09:59.759 "write_zeroes": true, 00:09:59.759 "zcopy": false, 00:09:59.759 "get_zone_info": false, 00:09:59.759 "zone_management": false, 00:09:59.759 "zone_append": false, 00:09:59.759 "compare": false, 00:09:59.759 "compare_and_write": false, 00:09:59.759 "abort": false, 00:09:59.759 "seek_hole": false, 00:09:59.759 "seek_data": false, 00:09:59.759 "copy": false, 00:09:59.759 "nvme_iov_md": false 00:09:59.759 }, 00:09:59.759 "memory_domains": [ 00:09:59.759 { 00:09:59.759 "dma_device_id": "system", 00:09:59.759 "dma_device_type": 1 00:09:59.759 }, 00:09:59.759 { 00:09:59.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.759 "dma_device_type": 2 00:09:59.759 }, 00:09:59.759 { 00:09:59.759 "dma_device_id": "system", 00:09:59.759 "dma_device_type": 1 00:09:59.759 }, 00:09:59.759 { 00:09:59.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.759 "dma_device_type": 2 00:09:59.759 } 00:09:59.759 ], 00:09:59.759 "driver_specific": { 00:09:59.759 "raid": { 00:09:59.759 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:09:59.759 "strip_size_kb": 0, 00:09:59.759 "state": "online", 00:09:59.759 "raid_level": "raid1", 
00:09:59.759 "superblock": true, 00:09:59.759 "num_base_bdevs": 2, 00:09:59.759 "num_base_bdevs_discovered": 2, 00:09:59.759 "num_base_bdevs_operational": 2, 00:09:59.759 "base_bdevs_list": [ 00:09:59.759 { 00:09:59.759 "name": "pt1", 00:09:59.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.759 "is_configured": true, 00:09:59.759 "data_offset": 2048, 00:09:59.759 "data_size": 63488 00:09:59.759 }, 00:09:59.759 { 00:09:59.759 "name": "pt2", 00:09:59.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.759 "is_configured": true, 00:09:59.759 "data_offset": 2048, 00:09:59.759 "data_size": 63488 00:09:59.759 } 00:09:59.759 ] 00:09:59.759 } 00:09:59.759 } 00:09:59.759 }' 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:59.759 pt2' 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:59.759 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:59.760 [2024-11-27 04:26:56.316606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.760 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4b18d916-dcd7-4ddc-b853-dfcd2f4c980e 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4b18d916-dcd7-4ddc-b853-dfcd2f4c980e ']' 00:10:00.018 04:26:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.018 [2024-11-27 04:26:56.364199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.018 [2024-11-27 04:26:56.364368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.018 [2024-11-27 04:26:56.364521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.018 [2024-11-27 04:26:56.364603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.018 [2024-11-27 04:26:56.364620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:00.018 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:00.019 04:26:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.019 [2024-11-27 04:26:56.504278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:00.019 [2024-11-27 04:26:56.506950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:00.019 [2024-11-27 04:26:56.507144] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:00.019 [2024-11-27 04:26:56.507270] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:00.019 [2024-11-27 04:26:56.507332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.019 [2024-11-27 04:26:56.507386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:00.019 request: 00:10:00.019 { 00:10:00.019 "name": "raid_bdev1", 00:10:00.019 "raid_level": "raid1", 00:10:00.019 "base_bdevs": [ 00:10:00.019 "malloc1", 00:10:00.019 "malloc2" 00:10:00.019 ], 00:10:00.019 "superblock": false, 00:10:00.019 "method": "bdev_raid_create", 00:10:00.019 "req_id": 1 00:10:00.019 } 00:10:00.019 Got 
JSON-RPC error response 00:10:00.019 response: 00:10:00.019 { 00:10:00.019 "code": -17, 00:10:00.019 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:00.019 } 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.019 [2024-11-27 04:26:56.564266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:00.019 [2024-11-27 04:26:56.564683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:00.019 [2024-11-27 04:26:56.564762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:00.019 [2024-11-27 04:26:56.564864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.019 [2024-11-27 04:26:56.568161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.019 [2024-11-27 04:26:56.568293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.019 [2024-11-27 04:26:56.568636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:00.019 [2024-11-27 04:26:56.568853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:00.019 pt1 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.019 
04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.019 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.279 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.279 "name": "raid_bdev1", 00:10:00.279 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:10:00.279 "strip_size_kb": 0, 00:10:00.279 "state": "configuring", 00:10:00.279 "raid_level": "raid1", 00:10:00.279 "superblock": true, 00:10:00.279 "num_base_bdevs": 2, 00:10:00.279 "num_base_bdevs_discovered": 1, 00:10:00.279 "num_base_bdevs_operational": 2, 00:10:00.279 "base_bdevs_list": [ 00:10:00.279 { 00:10:00.279 "name": "pt1", 00:10:00.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.279 "is_configured": true, 00:10:00.279 "data_offset": 2048, 00:10:00.279 "data_size": 63488 00:10:00.279 }, 00:10:00.279 { 00:10:00.279 "name": null, 00:10:00.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.279 "is_configured": false, 00:10:00.279 "data_offset": 2048, 00:10:00.279 "data_size": 63488 00:10:00.279 } 00:10:00.279 ] 00:10:00.279 }' 00:10:00.279 04:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.279 04:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 [2024-11-27 04:26:57.044266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.544 [2024-11-27 04:26:57.044508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.544 [2024-11-27 04:26:57.044576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:00.544 [2024-11-27 04:26:57.044617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.544 [2024-11-27 04:26:57.045315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.544 [2024-11-27 04:26:57.045389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.544 [2024-11-27 04:26:57.045539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:00.544 [2024-11-27 04:26:57.045606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.544 [2024-11-27 04:26:57.045793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.544 [2024-11-27 04:26:57.045841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:00.544 [2024-11-27 04:26:57.046197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:00.544 [2024-11-27 04:26:57.046437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.544 [2024-11-27 04:26:57.046479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:10:00.544 [2024-11-27 04:26:57.046708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.544 pt2 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
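The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` calls above fetch `bdev_raid_get_bdevs all`, select the target bdev with jq, and compare a handful of fields. As a minimal sketch of that check in Python (the helper name mirrors the shell function; the JSON is abridged from the log output above, and is an illustrative sample, not live RPC output):

```python
import json

# Sample bdev_raid_get_bdevs output, abridged from the log above.
raid_bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e",
    "strip_size_kb": 0,
    "state": "online",
    "raid_level": "raid1",
    "superblock": true,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
    "base_bdevs_list": [
      {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
       "is_configured": true, "data_offset": 2048, "data_size": 63488},
      {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
       "is_configured": true, "data_offset": 2048, "data_size": 63488}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Rough Python analogue of verify_raid_bdev_state() in bdev_raid.sh:
    pick the named raid bdev and compare the fields the test asserts on."""
    # jq equivalent: .[] | select(.name == "raid_bdev1")
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

info = verify_raid_bdev_state(raid_bdevs, "raid_bdev1", "online", "raid1", 0, 2)
# jq equivalent: .base_bdevs_list[] | select(.is_configured == true).name
configured = [b["name"] for b in info["base_bdevs_list"] if b["is_configured"]]
print(configured)  # ['pt1', 'pt2']
```

After the second passthru bdev is added, both base bdevs report `is_configured: true`, which is why the state flips from `configuring` back to `online` in the records that follow.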
00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.544 "name": "raid_bdev1", 00:10:00.544 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:10:00.544 "strip_size_kb": 0, 00:10:00.544 "state": "online", 00:10:00.544 "raid_level": "raid1", 00:10:00.544 "superblock": true, 00:10:00.544 "num_base_bdevs": 2, 00:10:00.544 "num_base_bdevs_discovered": 2, 00:10:00.544 "num_base_bdevs_operational": 2, 00:10:00.544 "base_bdevs_list": [ 00:10:00.544 { 00:10:00.544 "name": "pt1", 00:10:00.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.544 "is_configured": true, 00:10:00.544 "data_offset": 2048, 00:10:00.544 "data_size": 63488 00:10:00.544 }, 00:10:00.544 { 00:10:00.544 "name": "pt2", 00:10:00.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.544 "is_configured": true, 00:10:00.544 "data_offset": 2048, 00:10:00.544 "data_size": 63488 00:10:00.544 } 00:10:00.544 ] 00:10:00.544 }' 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.544 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.111 [2024-11-27 04:26:57.508302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.111 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.111 "name": "raid_bdev1", 00:10:01.111 "aliases": [ 00:10:01.111 "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e" 00:10:01.111 ], 00:10:01.111 "product_name": "Raid Volume", 00:10:01.111 "block_size": 512, 00:10:01.111 "num_blocks": 63488, 00:10:01.111 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:10:01.111 "assigned_rate_limits": { 00:10:01.111 "rw_ios_per_sec": 0, 00:10:01.111 "rw_mbytes_per_sec": 0, 00:10:01.111 "r_mbytes_per_sec": 0, 00:10:01.111 "w_mbytes_per_sec": 0 00:10:01.111 }, 00:10:01.111 "claimed": false, 00:10:01.111 "zoned": false, 00:10:01.112 "supported_io_types": { 00:10:01.112 "read": true, 00:10:01.112 "write": true, 00:10:01.112 "unmap": false, 00:10:01.112 "flush": false, 00:10:01.112 "reset": true, 00:10:01.112 "nvme_admin": false, 00:10:01.112 "nvme_io": false, 00:10:01.112 "nvme_io_md": false, 00:10:01.112 "write_zeroes": true, 00:10:01.112 "zcopy": false, 00:10:01.112 "get_zone_info": false, 00:10:01.112 "zone_management": false, 00:10:01.112 "zone_append": false, 00:10:01.112 "compare": false, 00:10:01.112 "compare_and_write": false, 00:10:01.112 "abort": false, 00:10:01.112 "seek_hole": false, 00:10:01.112 "seek_data": false, 00:10:01.112 "copy": false, 00:10:01.112 "nvme_iov_md": false 00:10:01.112 }, 00:10:01.112 "memory_domains": [ 00:10:01.112 { 00:10:01.112 "dma_device_id": 
"system", 00:10:01.112 "dma_device_type": 1 00:10:01.112 }, 00:10:01.112 { 00:10:01.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.112 "dma_device_type": 2 00:10:01.112 }, 00:10:01.112 { 00:10:01.112 "dma_device_id": "system", 00:10:01.112 "dma_device_type": 1 00:10:01.112 }, 00:10:01.112 { 00:10:01.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.112 "dma_device_type": 2 00:10:01.112 } 00:10:01.112 ], 00:10:01.112 "driver_specific": { 00:10:01.112 "raid": { 00:10:01.112 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:10:01.112 "strip_size_kb": 0, 00:10:01.112 "state": "online", 00:10:01.112 "raid_level": "raid1", 00:10:01.112 "superblock": true, 00:10:01.112 "num_base_bdevs": 2, 00:10:01.112 "num_base_bdevs_discovered": 2, 00:10:01.112 "num_base_bdevs_operational": 2, 00:10:01.112 "base_bdevs_list": [ 00:10:01.112 { 00:10:01.112 "name": "pt1", 00:10:01.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.112 "is_configured": true, 00:10:01.112 "data_offset": 2048, 00:10:01.112 "data_size": 63488 00:10:01.112 }, 00:10:01.112 { 00:10:01.112 "name": "pt2", 00:10:01.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.112 "is_configured": true, 00:10:01.112 "data_offset": 2048, 00:10:01.112 "data_size": 63488 00:10:01.112 } 00:10:01.112 ] 00:10:01.112 } 00:10:01.112 } 00:10:01.112 }' 00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:01.112 pt2' 00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.112 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:01.372 [2024-11-27 04:26:57.763854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4b18d916-dcd7-4ddc-b853-dfcd2f4c980e '!=' 4b18d916-dcd7-4ddc-b853-dfcd2f4c980e ']' 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.372 [2024-11-27 04:26:57.811539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.372 "name": "raid_bdev1", 00:10:01.372 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:10:01.372 "strip_size_kb": 0, 00:10:01.372 "state": "online", 00:10:01.372 "raid_level": "raid1", 00:10:01.372 "superblock": true, 00:10:01.372 "num_base_bdevs": 2, 00:10:01.372 "num_base_bdevs_discovered": 1, 00:10:01.372 "num_base_bdevs_operational": 1, 00:10:01.372 "base_bdevs_list": [ 00:10:01.372 { 00:10:01.372 "name": null, 00:10:01.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.372 "is_configured": false, 00:10:01.372 "data_offset": 0, 00:10:01.372 "data_size": 63488 00:10:01.372 }, 00:10:01.372 { 00:10:01.372 "name": "pt2", 00:10:01.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.372 "is_configured": true, 00:10:01.372 "data_offset": 2048, 00:10:01.372 "data_size": 63488 00:10:01.372 } 00:10:01.372 ] 00:10:01.372 }' 00:10:01.372 04:26:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.372 04:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.939 [2024-11-27 04:26:58.270754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.939 [2024-11-27 04:26:58.270816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.939 [2024-11-27 04:26:58.270945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.939 [2024-11-27 04:26:58.271010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.939 [2024-11-27 04:26:58.271026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:01.939 
04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:01.939 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.940 [2024-11-27 04:26:58.342624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:01.940 [2024-11-27 04:26:58.342826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.940 [2024-11-27 04:26:58.342874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:01.940 [2024-11-27 04:26:58.342917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.940 [2024-11-27 
04:26:58.346069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.940 [2024-11-27 04:26:58.346217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:01.940 [2024-11-27 04:26:58.346381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:01.940 [2024-11-27 04:26:58.346497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:01.940 [2024-11-27 04:26:58.346680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:01.940 [2024-11-27 04:26:58.346728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:01.940 [2024-11-27 04:26:58.347080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:01.940 [2024-11-27 04:26:58.347349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:01.940 [2024-11-27 04:26:58.347399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:01.940 [2024-11-27 04:26:58.347702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.940 pt2 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.940 "name": "raid_bdev1", 00:10:01.940 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:10:01.940 "strip_size_kb": 0, 00:10:01.940 "state": "online", 00:10:01.940 "raid_level": "raid1", 00:10:01.940 "superblock": true, 00:10:01.940 "num_base_bdevs": 2, 00:10:01.940 "num_base_bdevs_discovered": 1, 00:10:01.940 "num_base_bdevs_operational": 1, 00:10:01.940 "base_bdevs_list": [ 00:10:01.940 { 00:10:01.940 "name": null, 00:10:01.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.940 "is_configured": false, 00:10:01.940 "data_offset": 2048, 00:10:01.940 "data_size": 63488 00:10:01.940 }, 00:10:01.940 { 00:10:01.940 "name": "pt2", 00:10:01.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.940 "is_configured": true, 00:10:01.940 "data_offset": 2048, 00:10:01.940 "data_size": 63488 00:10:01.940 } 00:10:01.940 ] 00:10:01.940 }' 
00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.940 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.507 [2024-11-27 04:26:58.814107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.507 [2024-11-27 04:26:58.814170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.507 [2024-11-27 04:26:58.814293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.507 [2024-11-27 04:26:58.814364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.507 [2024-11-27 04:26:58.814377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.507 [2024-11-27 04:26:58.874040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.507 [2024-11-27 04:26:58.874249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.507 [2024-11-27 04:26:58.874302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:02.507 [2024-11-27 04:26:58.874347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.507 [2024-11-27 04:26:58.877472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.507 [2024-11-27 04:26:58.877597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.507 [2024-11-27 04:26:58.877765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:02.507 [2024-11-27 04:26:58.877859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:02.507 [2024-11-27 04:26:58.878117] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:02.507 [2024-11-27 04:26:58.878184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.507 [2024-11-27 04:26:58.878243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:02.507 [2024-11-27 04:26:58.878363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:10:02.507 [2024-11-27 04:26:58.878558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:02.507 [2024-11-27 04:26:58.878606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:02.507 pt1 00:10:02.507 [2024-11-27 04:26:58.878982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:02.507 [2024-11-27 04:26:58.879199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:02.507 [2024-11-27 04:26:58.879218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.507 [2024-11-27 04:26:58.879401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.507 "name": "raid_bdev1", 00:10:02.507 "uuid": "4b18d916-dcd7-4ddc-b853-dfcd2f4c980e", 00:10:02.507 "strip_size_kb": 0, 00:10:02.507 "state": "online", 00:10:02.507 "raid_level": "raid1", 00:10:02.507 "superblock": true, 00:10:02.507 "num_base_bdevs": 2, 00:10:02.507 "num_base_bdevs_discovered": 1, 00:10:02.507 "num_base_bdevs_operational": 1, 00:10:02.507 "base_bdevs_list": [ 00:10:02.507 { 00:10:02.507 "name": null, 00:10:02.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.507 "is_configured": false, 00:10:02.507 "data_offset": 2048, 00:10:02.507 "data_size": 63488 00:10:02.507 }, 00:10:02.507 { 00:10:02.507 "name": "pt2", 00:10:02.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.507 "is_configured": true, 00:10:02.507 "data_offset": 2048, 00:10:02.507 "data_size": 63488 00:10:02.507 } 00:10:02.507 ] 00:10:02.507 }' 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.507 04:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.774 04:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:02.774 04:26:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.774 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.774 04:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:02.774 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.036 [2024-11-27 04:26:59.373699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4b18d916-dcd7-4ddc-b853-dfcd2f4c980e '!=' 4b18d916-dcd7-4ddc-b853-dfcd2f4c980e ']' 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63390 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63390 ']' 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63390 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63390 00:10:03.036 killing process with pid 
63390 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63390' 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63390 00:10:03.036 [2024-11-27 04:26:59.429938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.036 04:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63390 00:10:03.036 [2024-11-27 04:26:59.430062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.036 [2024-11-27 04:26:59.430141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.036 [2024-11-27 04:26:59.430166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:03.294 [2024-11-27 04:26:59.680051] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.684 04:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:04.684 00:10:04.684 real 0m6.455s 00:10:04.684 user 0m9.574s 00:10:04.684 sys 0m1.148s 00:10:04.684 04:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.684 04:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.684 ************************************ 00:10:04.684 END TEST raid_superblock_test 00:10:04.684 ************************************ 00:10:04.684 04:27:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:04.684 04:27:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:04.684 04:27:01 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.684 04:27:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.684 ************************************ 00:10:04.684 START TEST raid_read_error_test 00:10:04.684 ************************************ 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:04.684 04:27:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4ronTSIu0l 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63720 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:04.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63720 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63720 ']' 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.684 04:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.684 [2024-11-27 04:27:01.148195] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:04.684 [2024-11-27 04:27:01.148909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63720 ] 00:10:04.949 [2024-11-27 04:27:01.314665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.949 [2024-11-27 04:27:01.456651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.206 [2024-11-27 04:27:01.698376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.207 [2024-11-27 04:27:01.698511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.774 BaseBdev1_malloc 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.774 true 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.774 [2024-11-27 04:27:02.150916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:05.774 [2024-11-27 04:27:02.150988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.774 [2024-11-27 04:27:02.151014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:05.774 [2024-11-27 04:27:02.151027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.774 [2024-11-27 04:27:02.153589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.774 [2024-11-27 04:27:02.153698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:05.774 BaseBdev1 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:05.774 BaseBdev2_malloc 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.774 true 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.774 [2024-11-27 04:27:02.223870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:05.774 [2024-11-27 04:27:02.223961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.774 [2024-11-27 04:27:02.223987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:05.774 [2024-11-27 04:27:02.223999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.774 [2024-11-27 04:27:02.226580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.774 [2024-11-27 04:27:02.226698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:05.774 BaseBdev2 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:05.774 04:27:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.774 [2024-11-27 04:27:02.235924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.774 [2024-11-27 04:27:02.238140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.774 [2024-11-27 04:27:02.238383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:05.774 [2024-11-27 04:27:02.238401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:05.774 [2024-11-27 04:27:02.238703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:05.774 [2024-11-27 04:27:02.238908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:05.774 [2024-11-27 04:27:02.238920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:05.774 [2024-11-27 04:27:02.239138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.774 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.774 "name": "raid_bdev1", 00:10:05.774 "uuid": "3f999ef2-0558-429e-b106-a7a8ec791908", 00:10:05.774 "strip_size_kb": 0, 00:10:05.774 "state": "online", 00:10:05.774 "raid_level": "raid1", 00:10:05.774 "superblock": true, 00:10:05.774 "num_base_bdevs": 2, 00:10:05.774 "num_base_bdevs_discovered": 2, 00:10:05.774 "num_base_bdevs_operational": 2, 00:10:05.774 "base_bdevs_list": [ 00:10:05.774 { 00:10:05.774 "name": "BaseBdev1", 00:10:05.774 "uuid": "8e33dbc2-a787-56cc-9e08-6456fc934db1", 00:10:05.774 "is_configured": true, 00:10:05.774 "data_offset": 2048, 00:10:05.774 "data_size": 63488 00:10:05.775 }, 00:10:05.775 { 00:10:05.775 "name": "BaseBdev2", 00:10:05.775 "uuid": "8620de25-b7f1-54ab-b58a-1c028799d34d", 00:10:05.775 "is_configured": true, 00:10:05.775 "data_offset": 2048, 00:10:05.775 "data_size": 63488 00:10:05.775 } 00:10:05.775 ] 00:10:05.775 }' 00:10:05.775 04:27:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.775 04:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.341 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:06.341 04:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:06.341 [2024-11-27 04:27:02.800368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.282 04:27:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.282 "name": "raid_bdev1", 00:10:07.282 "uuid": "3f999ef2-0558-429e-b106-a7a8ec791908", 00:10:07.282 "strip_size_kb": 0, 00:10:07.282 "state": "online", 00:10:07.282 "raid_level": "raid1", 00:10:07.282 "superblock": true, 00:10:07.282 "num_base_bdevs": 2, 00:10:07.282 "num_base_bdevs_discovered": 2, 00:10:07.282 "num_base_bdevs_operational": 2, 00:10:07.282 "base_bdevs_list": [ 00:10:07.282 { 00:10:07.282 "name": "BaseBdev1", 00:10:07.282 "uuid": "8e33dbc2-a787-56cc-9e08-6456fc934db1", 00:10:07.282 "is_configured": true, 00:10:07.282 "data_offset": 2048, 00:10:07.282 "data_size": 63488 00:10:07.282 }, 00:10:07.282 { 00:10:07.282 "name": "BaseBdev2", 00:10:07.282 "uuid": "8620de25-b7f1-54ab-b58a-1c028799d34d", 00:10:07.282 "is_configured": true, 00:10:07.282 "data_offset": 2048, 00:10:07.282 "data_size": 63488 
00:10:07.282 } 00:10:07.282 ] 00:10:07.282 }' 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.282 04:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.542 [2024-11-27 04:27:04.104872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.542 [2024-11-27 04:27:04.104910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.542 [2024-11-27 04:27:04.107885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.542 [2024-11-27 04:27:04.107935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.542 [2024-11-27 04:27:04.108036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.542 [2024-11-27 04:27:04.108050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:07.542 { 00:10:07.542 "results": [ 00:10:07.542 { 00:10:07.542 "job": "raid_bdev1", 00:10:07.542 "core_mask": "0x1", 00:10:07.542 "workload": "randrw", 00:10:07.542 "percentage": 50, 00:10:07.542 "status": "finished", 00:10:07.542 "queue_depth": 1, 00:10:07.542 "io_size": 131072, 00:10:07.542 "runtime": 1.305011, 00:10:07.542 "iops": 14833.591440991686, 00:10:07.542 "mibps": 1854.1989301239607, 00:10:07.542 "io_failed": 0, 00:10:07.542 "io_timeout": 0, 00:10:07.542 "avg_latency_us": 64.14694217120665, 00:10:07.542 "min_latency_us": 27.053275109170304, 00:10:07.542 "max_latency_us": 1888.810480349345 00:10:07.542 } 00:10:07.542 ], 
00:10:07.542 "core_count": 1 00:10:07.542 } 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63720 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63720 ']' 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63720 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.542 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63720 00:10:07.802 killing process with pid 63720 00:10:07.802 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.802 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.802 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63720' 00:10:07.802 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63720 00:10:07.802 [2024-11-27 04:27:04.149271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.802 04:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63720 00:10:07.802 [2024-11-27 04:27:04.290929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4ronTSIu0l 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:09.228 00:10:09.228 real 0m4.599s 00:10:09.228 user 0m5.507s 00:10:09.228 sys 0m0.556s 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.228 04:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.228 ************************************ 00:10:09.228 END TEST raid_read_error_test 00:10:09.228 ************************************ 00:10:09.228 04:27:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:10:09.228 04:27:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:09.228 04:27:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.228 04:27:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.228 ************************************ 00:10:09.228 START TEST raid_write_error_test 00:10:09.228 ************************************ 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iAngRRoNEW 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63860 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63860 00:10:09.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63860 ']' 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.228 04:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.501 [2024-11-27 04:27:05.835651] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:09.501 [2024-11-27 04:27:05.835810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63860 ] 00:10:09.501 [2024-11-27 04:27:06.010988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.761 [2024-11-27 04:27:06.139759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.020 [2024-11-27 04:27:06.379583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.020 [2024-11-27 04:27:06.379648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.280 BaseBdev1_malloc 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.280 true 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.280 [2024-11-27 04:27:06.759023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:10.280 [2024-11-27 04:27:06.759129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.280 [2024-11-27 04:27:06.759159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:10.280 [2024-11-27 04:27:06.759171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.280 [2024-11-27 04:27:06.761716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.280 [2024-11-27 04:27:06.761861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:10.280 BaseBdev1 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.280 BaseBdev2_malloc 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:10.280 04:27:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.280 true 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.280 [2024-11-27 04:27:06.832447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:10.280 [2024-11-27 04:27:06.832507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.280 [2024-11-27 04:27:06.832526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:10.280 [2024-11-27 04:27:06.832538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.280 [2024-11-27 04:27:06.834918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.280 [2024-11-27 04:27:06.835007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:10.280 BaseBdev2 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.280 [2024-11-27 04:27:06.844515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:10.280 [2024-11-27 04:27:06.846676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.280 [2024-11-27 04:27:06.846935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:10.280 [2024-11-27 04:27:06.846954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:10.280 [2024-11-27 04:27:06.847281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:10.280 [2024-11-27 04:27:06.847484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:10.280 [2024-11-27 04:27:06.847503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:10.280 [2024-11-27 04:27:06.847728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.280 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.281 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.281 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.281 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.281 04:27:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.281 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.281 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.281 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.281 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.281 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.540 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.540 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.540 "name": "raid_bdev1", 00:10:10.540 "uuid": "02103188-7476-4928-8ce6-d31290d56c62", 00:10:10.540 "strip_size_kb": 0, 00:10:10.540 "state": "online", 00:10:10.540 "raid_level": "raid1", 00:10:10.540 "superblock": true, 00:10:10.540 "num_base_bdevs": 2, 00:10:10.540 "num_base_bdevs_discovered": 2, 00:10:10.540 "num_base_bdevs_operational": 2, 00:10:10.540 "base_bdevs_list": [ 00:10:10.540 { 00:10:10.540 "name": "BaseBdev1", 00:10:10.540 "uuid": "69fbd103-f16a-5acf-8e3f-8ff22f760512", 00:10:10.540 "is_configured": true, 00:10:10.540 "data_offset": 2048, 00:10:10.540 "data_size": 63488 00:10:10.540 }, 00:10:10.540 { 00:10:10.540 "name": "BaseBdev2", 00:10:10.540 "uuid": "a95eb885-8561-5788-aa25-f0a5f886a812", 00:10:10.540 "is_configured": true, 00:10:10.540 "data_offset": 2048, 00:10:10.540 "data_size": 63488 00:10:10.540 } 00:10:10.540 ] 00:10:10.540 }' 00:10:10.540 04:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.540 04:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.800 04:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:10.800 04:27:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:11.058 [2024-11-27 04:27:07.397122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.992 [2024-11-27 04:27:08.294328] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:11.992 [2024-11-27 04:27:08.294473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.992 [2024-11-27 04:27:08.294730] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.992 04:27:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.993 "name": "raid_bdev1", 00:10:11.993 "uuid": "02103188-7476-4928-8ce6-d31290d56c62", 00:10:11.993 "strip_size_kb": 0, 00:10:11.993 "state": "online", 00:10:11.993 "raid_level": "raid1", 00:10:11.993 "superblock": true, 00:10:11.993 "num_base_bdevs": 2, 00:10:11.993 "num_base_bdevs_discovered": 1, 00:10:11.993 "num_base_bdevs_operational": 1, 00:10:11.993 "base_bdevs_list": [ 00:10:11.993 { 00:10:11.993 "name": null, 00:10:11.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.993 "is_configured": false, 00:10:11.993 "data_offset": 0, 00:10:11.993 "data_size": 63488 00:10:11.993 }, 00:10:11.993 { 00:10:11.993 "name": 
"BaseBdev2", 00:10:11.993 "uuid": "a95eb885-8561-5788-aa25-f0a5f886a812", 00:10:11.993 "is_configured": true, 00:10:11.993 "data_offset": 2048, 00:10:11.993 "data_size": 63488 00:10:11.993 } 00:10:11.993 ] 00:10:11.993 }' 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.993 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.251 [2024-11-27 04:27:08.788540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.251 [2024-11-27 04:27:08.788657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.251 [2024-11-27 04:27:08.791625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.251 [2024-11-27 04:27:08.791671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.251 [2024-11-27 04:27:08.791733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.251 [2024-11-27 04:27:08.791746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:12.251 { 00:10:12.251 "results": [ 00:10:12.251 { 00:10:12.251 "job": "raid_bdev1", 00:10:12.251 "core_mask": "0x1", 00:10:12.251 "workload": "randrw", 00:10:12.251 "percentage": 50, 00:10:12.251 "status": "finished", 00:10:12.251 "queue_depth": 1, 00:10:12.251 "io_size": 131072, 00:10:12.251 "runtime": 1.392321, 00:10:12.251 "iops": 18450.4866334703, 00:10:12.251 "mibps": 2306.3108291837875, 00:10:12.251 "io_failed": 0, 00:10:12.251 "io_timeout": 0, 
00:10:12.251 "avg_latency_us": 51.0512181228572, 00:10:12.251 "min_latency_us": 24.258515283842794, 00:10:12.251 "max_latency_us": 1609.7816593886462 00:10:12.251 } 00:10:12.251 ], 00:10:12.251 "core_count": 1 00:10:12.251 } 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63860 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63860 ']' 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63860 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.251 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63860 00:10:12.509 killing process with pid 63860 00:10:12.509 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.509 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.509 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63860' 00:10:12.509 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63860 00:10:12.509 [2024-11-27 04:27:08.836454] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.509 04:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63860 00:10:12.509 [2024-11-27 04:27:08.996223] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iAngRRoNEW 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:13.886 ************************************ 00:10:13.886 END TEST raid_write_error_test 00:10:13.886 ************************************ 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:13.886 00:10:13.886 real 0m4.725s 00:10:13.886 user 0m5.677s 00:10:13.886 sys 0m0.559s 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.886 04:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.146 04:27:10 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:14.146 04:27:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:14.146 04:27:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:14.146 04:27:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:14.146 04:27:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.146 04:27:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.146 ************************************ 00:10:14.146 START TEST raid_state_function_test 00:10:14.146 ************************************ 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:14.146 
04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64004 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64004' 00:10:14.146 Process raid pid: 64004 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64004 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64004 ']' 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.146 04:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.146 [2024-11-27 04:27:10.610870] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:14.146 [2024-11-27 04:27:10.611123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.405 [2024-11-27 04:27:10.792775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.405 [2024-11-27 04:27:10.931095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.663 [2024-11-27 04:27:11.181633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.663 [2024-11-27 04:27:11.181808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.233 [2024-11-27 04:27:11.587032] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.233 [2024-11-27 04:27:11.587118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.233 [2024-11-27 04:27:11.587133] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.233 [2024-11-27 04:27:11.587148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.233 [2024-11-27 04:27:11.587159] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.233 [2024-11-27 04:27:11.587173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.233 "name": "Existed_Raid", 00:10:15.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.233 "strip_size_kb": 64, 00:10:15.233 "state": "configuring", 00:10:15.233 "raid_level": "raid0", 00:10:15.233 "superblock": false, 00:10:15.233 "num_base_bdevs": 3, 00:10:15.233 "num_base_bdevs_discovered": 0, 00:10:15.233 "num_base_bdevs_operational": 3, 00:10:15.233 "base_bdevs_list": [ 00:10:15.233 { 00:10:15.233 "name": "BaseBdev1", 00:10:15.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.233 "is_configured": false, 00:10:15.233 "data_offset": 0, 00:10:15.233 "data_size": 0 00:10:15.233 }, 00:10:15.233 { 00:10:15.233 "name": "BaseBdev2", 00:10:15.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.233 "is_configured": false, 00:10:15.233 "data_offset": 0, 00:10:15.233 "data_size": 0 00:10:15.233 }, 00:10:15.233 { 00:10:15.233 "name": "BaseBdev3", 00:10:15.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.233 "is_configured": false, 00:10:15.233 "data_offset": 0, 00:10:15.233 "data_size": 0 00:10:15.233 } 00:10:15.233 ] 00:10:15.233 }' 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.233 04:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.493 04:27:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.493 [2024-11-27 04:27:12.014355] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.493 [2024-11-27 04:27:12.014424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.493 [2024-11-27 04:27:12.022329] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.493 [2024-11-27 04:27:12.022413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.493 [2024-11-27 04:27:12.022426] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.493 [2024-11-27 04:27:12.022438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.493 [2024-11-27 04:27:12.022446] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.493 [2024-11-27 04:27:12.022456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:15.493 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.752 [2024-11-27 04:27:12.100525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.752 BaseBdev1 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.752 [ 00:10:15.752 { 00:10:15.752 "name": "BaseBdev1", 00:10:15.752 "aliases": [ 00:10:15.752 "54da0d2b-1ed8-4658-8120-66a431ff81c8" 00:10:15.752 ], 00:10:15.752 
"product_name": "Malloc disk", 00:10:15.752 "block_size": 512, 00:10:15.752 "num_blocks": 65536, 00:10:15.752 "uuid": "54da0d2b-1ed8-4658-8120-66a431ff81c8", 00:10:15.752 "assigned_rate_limits": { 00:10:15.752 "rw_ios_per_sec": 0, 00:10:15.752 "rw_mbytes_per_sec": 0, 00:10:15.752 "r_mbytes_per_sec": 0, 00:10:15.752 "w_mbytes_per_sec": 0 00:10:15.752 }, 00:10:15.752 "claimed": true, 00:10:15.752 "claim_type": "exclusive_write", 00:10:15.752 "zoned": false, 00:10:15.752 "supported_io_types": { 00:10:15.752 "read": true, 00:10:15.752 "write": true, 00:10:15.752 "unmap": true, 00:10:15.752 "flush": true, 00:10:15.752 "reset": true, 00:10:15.752 "nvme_admin": false, 00:10:15.752 "nvme_io": false, 00:10:15.752 "nvme_io_md": false, 00:10:15.752 "write_zeroes": true, 00:10:15.752 "zcopy": true, 00:10:15.752 "get_zone_info": false, 00:10:15.752 "zone_management": false, 00:10:15.752 "zone_append": false, 00:10:15.752 "compare": false, 00:10:15.752 "compare_and_write": false, 00:10:15.752 "abort": true, 00:10:15.752 "seek_hole": false, 00:10:15.752 "seek_data": false, 00:10:15.752 "copy": true, 00:10:15.752 "nvme_iov_md": false 00:10:15.752 }, 00:10:15.752 "memory_domains": [ 00:10:15.752 { 00:10:15.752 "dma_device_id": "system", 00:10:15.752 "dma_device_type": 1 00:10:15.752 }, 00:10:15.752 { 00:10:15.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.752 "dma_device_type": 2 00:10:15.752 } 00:10:15.752 ], 00:10:15.752 "driver_specific": {} 00:10:15.752 } 00:10:15.752 ] 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.752 04:27:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.752 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.752 "name": "Existed_Raid", 00:10:15.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.752 "strip_size_kb": 64, 00:10:15.752 "state": "configuring", 00:10:15.752 "raid_level": "raid0", 00:10:15.752 "superblock": false, 00:10:15.752 "num_base_bdevs": 3, 00:10:15.752 "num_base_bdevs_discovered": 1, 00:10:15.752 "num_base_bdevs_operational": 3, 00:10:15.752 "base_bdevs_list": [ 00:10:15.752 { 00:10:15.752 "name": "BaseBdev1", 
00:10:15.752 "uuid": "54da0d2b-1ed8-4658-8120-66a431ff81c8", 00:10:15.752 "is_configured": true, 00:10:15.752 "data_offset": 0, 00:10:15.752 "data_size": 65536 00:10:15.752 }, 00:10:15.752 { 00:10:15.752 "name": "BaseBdev2", 00:10:15.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.752 "is_configured": false, 00:10:15.752 "data_offset": 0, 00:10:15.752 "data_size": 0 00:10:15.752 }, 00:10:15.752 { 00:10:15.752 "name": "BaseBdev3", 00:10:15.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.752 "is_configured": false, 00:10:15.752 "data_offset": 0, 00:10:15.753 "data_size": 0 00:10:15.753 } 00:10:15.753 ] 00:10:15.753 }' 00:10:15.753 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.753 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.010 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.010 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.010 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.010 [2024-11-27 04:27:12.588139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.010 [2024-11-27 04:27:12.588323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:16.010 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.010 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:16.010 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.010 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.270 [2024-11-27 
04:27:12.596258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.270 [2024-11-27 04:27:12.598927] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.270 [2024-11-27 04:27:12.599062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.270 [2024-11-27 04:27:12.599116] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:16.270 [2024-11-27 04:27:12.599146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.270 "name": "Existed_Raid", 00:10:16.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.270 "strip_size_kb": 64, 00:10:16.270 "state": "configuring", 00:10:16.270 "raid_level": "raid0", 00:10:16.270 "superblock": false, 00:10:16.270 "num_base_bdevs": 3, 00:10:16.270 "num_base_bdevs_discovered": 1, 00:10:16.270 "num_base_bdevs_operational": 3, 00:10:16.270 "base_bdevs_list": [ 00:10:16.270 { 00:10:16.270 "name": "BaseBdev1", 00:10:16.270 "uuid": "54da0d2b-1ed8-4658-8120-66a431ff81c8", 00:10:16.270 "is_configured": true, 00:10:16.270 "data_offset": 0, 00:10:16.270 "data_size": 65536 00:10:16.270 }, 00:10:16.270 { 00:10:16.270 "name": "BaseBdev2", 00:10:16.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.270 "is_configured": false, 00:10:16.270 "data_offset": 0, 00:10:16.270 "data_size": 0 00:10:16.270 }, 00:10:16.270 { 00:10:16.270 "name": "BaseBdev3", 00:10:16.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.270 "is_configured": false, 00:10:16.270 "data_offset": 0, 00:10:16.270 "data_size": 0 00:10:16.270 } 00:10:16.270 ] 00:10:16.270 }' 00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:16.270 04:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.528 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.528 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.528 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.528 [2024-11-27 04:27:13.109592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.528 BaseBdev2 00:10:16.528 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.528 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.853 04:27:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.853 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.853 [ 00:10:16.853 { 00:10:16.853 "name": "BaseBdev2", 00:10:16.853 "aliases": [ 00:10:16.853 "8db83454-bfa3-4e4a-bc29-b995addfb392" 00:10:16.853 ], 00:10:16.853 "product_name": "Malloc disk", 00:10:16.853 "block_size": 512, 00:10:16.853 "num_blocks": 65536, 00:10:16.853 "uuid": "8db83454-bfa3-4e4a-bc29-b995addfb392", 00:10:16.853 "assigned_rate_limits": { 00:10:16.853 "rw_ios_per_sec": 0, 00:10:16.853 "rw_mbytes_per_sec": 0, 00:10:16.853 "r_mbytes_per_sec": 0, 00:10:16.853 "w_mbytes_per_sec": 0 00:10:16.853 }, 00:10:16.853 "claimed": true, 00:10:16.853 "claim_type": "exclusive_write", 00:10:16.854 "zoned": false, 00:10:16.854 "supported_io_types": { 00:10:16.854 "read": true, 00:10:16.854 "write": true, 00:10:16.854 "unmap": true, 00:10:16.854 "flush": true, 00:10:16.854 "reset": true, 00:10:16.854 "nvme_admin": false, 00:10:16.854 "nvme_io": false, 00:10:16.854 "nvme_io_md": false, 00:10:16.854 "write_zeroes": true, 00:10:16.854 "zcopy": true, 00:10:16.854 "get_zone_info": false, 00:10:16.854 "zone_management": false, 00:10:16.854 "zone_append": false, 00:10:16.854 "compare": false, 00:10:16.854 "compare_and_write": false, 00:10:16.854 "abort": true, 00:10:16.854 "seek_hole": false, 00:10:16.854 "seek_data": false, 00:10:16.854 "copy": true, 00:10:16.854 "nvme_iov_md": false 00:10:16.854 }, 00:10:16.854 "memory_domains": [ 00:10:16.854 { 00:10:16.854 "dma_device_id": "system", 00:10:16.854 "dma_device_type": 1 00:10:16.854 }, 00:10:16.854 { 00:10:16.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.854 "dma_device_type": 2 00:10:16.854 } 00:10:16.854 ], 00:10:16.854 "driver_specific": {} 00:10:16.854 } 00:10:16.854 ] 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.854 04:27:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.854 "name": "Existed_Raid", 00:10:16.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.854 "strip_size_kb": 64, 00:10:16.854 "state": "configuring", 00:10:16.854 "raid_level": "raid0", 00:10:16.854 "superblock": false, 00:10:16.854 "num_base_bdevs": 3, 00:10:16.854 "num_base_bdevs_discovered": 2, 00:10:16.854 "num_base_bdevs_operational": 3, 00:10:16.854 "base_bdevs_list": [ 00:10:16.854 { 00:10:16.854 "name": "BaseBdev1", 00:10:16.854 "uuid": "54da0d2b-1ed8-4658-8120-66a431ff81c8", 00:10:16.854 "is_configured": true, 00:10:16.854 "data_offset": 0, 00:10:16.854 "data_size": 65536 00:10:16.854 }, 00:10:16.854 { 00:10:16.854 "name": "BaseBdev2", 00:10:16.854 "uuid": "8db83454-bfa3-4e4a-bc29-b995addfb392", 00:10:16.854 "is_configured": true, 00:10:16.854 "data_offset": 0, 00:10:16.854 "data_size": 65536 00:10:16.854 }, 00:10:16.854 { 00:10:16.854 "name": "BaseBdev3", 00:10:16.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.854 "is_configured": false, 00:10:16.854 "data_offset": 0, 00:10:16.854 "data_size": 0 00:10:16.854 } 00:10:16.854 ] 00:10:16.854 }' 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.854 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.157 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:17.157 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.157 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.157 [2024-11-27 04:27:13.681370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.157 [2024-11-27 04:27:13.681442] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:17.157 [2024-11-27 04:27:13.681462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:17.157 [2024-11-27 04:27:13.681820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:17.157 [2024-11-27 04:27:13.682055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:17.157 [2024-11-27 04:27:13.682075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:17.157 [2024-11-27 04:27:13.682456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.157 BaseBdev3 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.158 
04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.158 [ 00:10:17.158 { 00:10:17.158 "name": "BaseBdev3", 00:10:17.158 "aliases": [ 00:10:17.158 "fa84c146-179a-420b-b5f6-7ccc61131058" 00:10:17.158 ], 00:10:17.158 "product_name": "Malloc disk", 00:10:17.158 "block_size": 512, 00:10:17.158 "num_blocks": 65536, 00:10:17.158 "uuid": "fa84c146-179a-420b-b5f6-7ccc61131058", 00:10:17.158 "assigned_rate_limits": { 00:10:17.158 "rw_ios_per_sec": 0, 00:10:17.158 "rw_mbytes_per_sec": 0, 00:10:17.158 "r_mbytes_per_sec": 0, 00:10:17.158 "w_mbytes_per_sec": 0 00:10:17.158 }, 00:10:17.158 "claimed": true, 00:10:17.158 "claim_type": "exclusive_write", 00:10:17.158 "zoned": false, 00:10:17.158 "supported_io_types": { 00:10:17.158 "read": true, 00:10:17.158 "write": true, 00:10:17.158 "unmap": true, 00:10:17.158 "flush": true, 00:10:17.158 "reset": true, 00:10:17.158 "nvme_admin": false, 00:10:17.158 "nvme_io": false, 00:10:17.158 "nvme_io_md": false, 00:10:17.158 "write_zeroes": true, 00:10:17.158 "zcopy": true, 00:10:17.158 "get_zone_info": false, 00:10:17.158 "zone_management": false, 00:10:17.158 "zone_append": false, 00:10:17.158 "compare": false, 00:10:17.158 "compare_and_write": false, 00:10:17.158 "abort": true, 00:10:17.158 "seek_hole": false, 00:10:17.158 "seek_data": false, 00:10:17.158 "copy": true, 00:10:17.158 "nvme_iov_md": false 00:10:17.158 }, 00:10:17.158 "memory_domains": [ 00:10:17.158 { 00:10:17.158 "dma_device_id": "system", 00:10:17.158 "dma_device_type": 1 00:10:17.158 }, 00:10:17.158 { 00:10:17.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.158 "dma_device_type": 2 00:10:17.158 } 00:10:17.158 ], 00:10:17.158 "driver_specific": {} 00:10:17.158 } 00:10:17.158 ] 
00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.158 04:27:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.417 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.417 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.417 "name": "Existed_Raid", 00:10:17.417 "uuid": "8f59171a-830c-4175-a448-93fa1c1551d6", 00:10:17.417 "strip_size_kb": 64, 00:10:17.417 "state": "online", 00:10:17.417 "raid_level": "raid0", 00:10:17.417 "superblock": false, 00:10:17.417 "num_base_bdevs": 3, 00:10:17.417 "num_base_bdevs_discovered": 3, 00:10:17.417 "num_base_bdevs_operational": 3, 00:10:17.417 "base_bdevs_list": [ 00:10:17.417 { 00:10:17.417 "name": "BaseBdev1", 00:10:17.417 "uuid": "54da0d2b-1ed8-4658-8120-66a431ff81c8", 00:10:17.417 "is_configured": true, 00:10:17.417 "data_offset": 0, 00:10:17.417 "data_size": 65536 00:10:17.417 }, 00:10:17.417 { 00:10:17.417 "name": "BaseBdev2", 00:10:17.417 "uuid": "8db83454-bfa3-4e4a-bc29-b995addfb392", 00:10:17.417 "is_configured": true, 00:10:17.417 "data_offset": 0, 00:10:17.417 "data_size": 65536 00:10:17.418 }, 00:10:17.418 { 00:10:17.418 "name": "BaseBdev3", 00:10:17.418 "uuid": "fa84c146-179a-420b-b5f6-7ccc61131058", 00:10:17.418 "is_configured": true, 00:10:17.418 "data_offset": 0, 00:10:17.418 "data_size": 65536 00:10:17.418 } 00:10:17.418 ] 00:10:17.418 }' 00:10:17.418 04:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.418 04:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.678 [2024-11-27 04:27:14.173252] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.678 "name": "Existed_Raid", 00:10:17.678 "aliases": [ 00:10:17.678 "8f59171a-830c-4175-a448-93fa1c1551d6" 00:10:17.678 ], 00:10:17.678 "product_name": "Raid Volume", 00:10:17.678 "block_size": 512, 00:10:17.678 "num_blocks": 196608, 00:10:17.678 "uuid": "8f59171a-830c-4175-a448-93fa1c1551d6", 00:10:17.678 "assigned_rate_limits": { 00:10:17.678 "rw_ios_per_sec": 0, 00:10:17.678 "rw_mbytes_per_sec": 0, 00:10:17.678 "r_mbytes_per_sec": 0, 00:10:17.678 "w_mbytes_per_sec": 0 00:10:17.678 }, 00:10:17.678 "claimed": false, 00:10:17.678 "zoned": false, 00:10:17.678 "supported_io_types": { 00:10:17.678 "read": true, 00:10:17.678 "write": true, 00:10:17.678 "unmap": true, 00:10:17.678 "flush": true, 00:10:17.678 "reset": true, 00:10:17.678 "nvme_admin": false, 00:10:17.678 "nvme_io": false, 00:10:17.678 "nvme_io_md": false, 00:10:17.678 "write_zeroes": true, 00:10:17.678 "zcopy": false, 00:10:17.678 "get_zone_info": false, 00:10:17.678 "zone_management": false, 00:10:17.678 
"zone_append": false, 00:10:17.678 "compare": false, 00:10:17.678 "compare_and_write": false, 00:10:17.678 "abort": false, 00:10:17.678 "seek_hole": false, 00:10:17.678 "seek_data": false, 00:10:17.678 "copy": false, 00:10:17.678 "nvme_iov_md": false 00:10:17.678 }, 00:10:17.678 "memory_domains": [ 00:10:17.678 { 00:10:17.678 "dma_device_id": "system", 00:10:17.678 "dma_device_type": 1 00:10:17.678 }, 00:10:17.678 { 00:10:17.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.678 "dma_device_type": 2 00:10:17.678 }, 00:10:17.678 { 00:10:17.678 "dma_device_id": "system", 00:10:17.678 "dma_device_type": 1 00:10:17.678 }, 00:10:17.678 { 00:10:17.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.678 "dma_device_type": 2 00:10:17.678 }, 00:10:17.678 { 00:10:17.678 "dma_device_id": "system", 00:10:17.678 "dma_device_type": 1 00:10:17.678 }, 00:10:17.678 { 00:10:17.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.678 "dma_device_type": 2 00:10:17.678 } 00:10:17.678 ], 00:10:17.678 "driver_specific": { 00:10:17.678 "raid": { 00:10:17.678 "uuid": "8f59171a-830c-4175-a448-93fa1c1551d6", 00:10:17.678 "strip_size_kb": 64, 00:10:17.678 "state": "online", 00:10:17.678 "raid_level": "raid0", 00:10:17.678 "superblock": false, 00:10:17.678 "num_base_bdevs": 3, 00:10:17.678 "num_base_bdevs_discovered": 3, 00:10:17.678 "num_base_bdevs_operational": 3, 00:10:17.678 "base_bdevs_list": [ 00:10:17.678 { 00:10:17.678 "name": "BaseBdev1", 00:10:17.678 "uuid": "54da0d2b-1ed8-4658-8120-66a431ff81c8", 00:10:17.678 "is_configured": true, 00:10:17.678 "data_offset": 0, 00:10:17.678 "data_size": 65536 00:10:17.678 }, 00:10:17.678 { 00:10:17.678 "name": "BaseBdev2", 00:10:17.678 "uuid": "8db83454-bfa3-4e4a-bc29-b995addfb392", 00:10:17.678 "is_configured": true, 00:10:17.678 "data_offset": 0, 00:10:17.678 "data_size": 65536 00:10:17.678 }, 00:10:17.678 { 00:10:17.678 "name": "BaseBdev3", 00:10:17.678 "uuid": "fa84c146-179a-420b-b5f6-7ccc61131058", 00:10:17.678 "is_configured": true, 
00:10:17.678 "data_offset": 0, 00:10:17.678 "data_size": 65536 00:10:17.678 } 00:10:17.678 ] 00:10:17.678 } 00:10:17.678 } 00:10:17.678 }' 00:10:17.678 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:17.938 BaseBdev2 00:10:17.938 BaseBdev3' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.938 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.938 [2024-11-27 04:27:14.456452] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.938 [2024-11-27 04:27:14.456512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.938 [2024-11-27 04:27:14.456592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.197 "name": "Existed_Raid", 00:10:18.197 "uuid": "8f59171a-830c-4175-a448-93fa1c1551d6", 00:10:18.197 "strip_size_kb": 64, 00:10:18.197 "state": "offline", 00:10:18.197 "raid_level": "raid0", 00:10:18.197 "superblock": false, 00:10:18.197 "num_base_bdevs": 3, 00:10:18.197 "num_base_bdevs_discovered": 2, 00:10:18.197 "num_base_bdevs_operational": 2, 00:10:18.197 "base_bdevs_list": [ 00:10:18.197 { 00:10:18.197 "name": null, 00:10:18.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.197 "is_configured": false, 00:10:18.197 "data_offset": 0, 00:10:18.197 "data_size": 65536 00:10:18.197 }, 00:10:18.197 { 00:10:18.197 "name": "BaseBdev2", 00:10:18.197 "uuid": "8db83454-bfa3-4e4a-bc29-b995addfb392", 00:10:18.197 "is_configured": true, 00:10:18.197 "data_offset": 0, 00:10:18.197 "data_size": 65536 00:10:18.197 }, 00:10:18.197 { 00:10:18.197 "name": "BaseBdev3", 00:10:18.197 "uuid": "fa84c146-179a-420b-b5f6-7ccc61131058", 00:10:18.197 "is_configured": true, 00:10:18.197 "data_offset": 0, 00:10:18.197 "data_size": 65536 00:10:18.197 } 00:10:18.197 ] 00:10:18.197 }' 00:10:18.197 04:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.197 04:27:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.764 [2024-11-27 04:27:15.126122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.764 04:27:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.764 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.764 [2024-11-27 04:27:15.313722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:18.764 [2024-11-27 04:27:15.313822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.023 BaseBdev2
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.023 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.023 [
00:10:19.023 {
00:10:19.023 "name": "BaseBdev2",
00:10:19.023 "aliases": [
00:10:19.023 "cfcebe54-3675-4176-8da9-f5d94ae48327"
00:10:19.023 ],
00:10:19.023 "product_name": "Malloc disk",
00:10:19.023 "block_size": 512,
00:10:19.023 "num_blocks": 65536,
00:10:19.023 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327",
00:10:19.023 "assigned_rate_limits": {
00:10:19.023 "rw_ios_per_sec": 0,
00:10:19.023 "rw_mbytes_per_sec": 0,
00:10:19.023 "r_mbytes_per_sec": 0,
00:10:19.023 "w_mbytes_per_sec": 0
00:10:19.023 },
00:10:19.023 "claimed": false,
00:10:19.023 "zoned": false,
00:10:19.023 "supported_io_types": {
00:10:19.023 "read": true,
00:10:19.023 "write": true,
00:10:19.023 "unmap": true,
00:10:19.023 "flush": true,
00:10:19.023 "reset": true,
00:10:19.023 "nvme_admin": false,
00:10:19.023 "nvme_io": false,
00:10:19.023 "nvme_io_md": false,
00:10:19.023 "write_zeroes": true,
00:10:19.023 "zcopy": true,
00:10:19.023 "get_zone_info": false,
00:10:19.023 "zone_management": false,
00:10:19.023 "zone_append": false,
00:10:19.023 "compare": false,
00:10:19.023 "compare_and_write": false,
00:10:19.023 "abort": true,
00:10:19.023 "seek_hole": false,
00:10:19.023 "seek_data": false,
00:10:19.023 "copy": true,
00:10:19.023 "nvme_iov_md": false
00:10:19.023 },
00:10:19.023 "memory_domains": [
00:10:19.023 {
00:10:19.024 "dma_device_id": "system",
00:10:19.024 "dma_device_type": 1
00:10:19.024 },
00:10:19.024 {
00:10:19.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:19.024 "dma_device_type": 2
00:10:19.024 }
00:10:19.024 ],
00:10:19.024 "driver_specific": {}
00:10:19.024 }
00:10:19.024 ]
00:10:19.024 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.024 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:19.024 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:19.024 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:19.024 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:19.024 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.024 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.283 BaseBdev3
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.283 [
00:10:19.283 {
00:10:19.283 "name": "BaseBdev3",
00:10:19.283 "aliases": [
00:10:19.283 "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7"
00:10:19.283 ],
00:10:19.283 "product_name": "Malloc disk",
00:10:19.283 "block_size": 512,
00:10:19.283 "num_blocks": 65536,
00:10:19.283 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7",
00:10:19.283 "assigned_rate_limits": {
00:10:19.283 "rw_ios_per_sec": 0,
00:10:19.283 "rw_mbytes_per_sec": 0,
00:10:19.283 "r_mbytes_per_sec": 0,
00:10:19.283 "w_mbytes_per_sec": 0
00:10:19.283 },
00:10:19.283 "claimed": false,
00:10:19.283 "zoned": false,
00:10:19.283 "supported_io_types": {
00:10:19.283 "read": true,
00:10:19.283 "write": true,
00:10:19.283 "unmap": true,
00:10:19.283 "flush": true,
00:10:19.283 "reset": true,
00:10:19.283 "nvme_admin": false,
00:10:19.283 "nvme_io": false,
00:10:19.283 "nvme_io_md": false,
00:10:19.283 "write_zeroes": true,
00:10:19.283 "zcopy": true,
00:10:19.283 "get_zone_info": false,
00:10:19.283 "zone_management": false,
00:10:19.283 "zone_append": false,
00:10:19.283 "compare": false,
00:10:19.283 "compare_and_write": false,
00:10:19.283 "abort": true,
00:10:19.283 "seek_hole": false,
00:10:19.283 "seek_data": false,
00:10:19.283 "copy": true,
00:10:19.283 "nvme_iov_md": false
00:10:19.283 },
00:10:19.283 "memory_domains": [
00:10:19.283 {
00:10:19.283 "dma_device_id": "system",
00:10:19.283 "dma_device_type": 1
00:10:19.283 },
00:10:19.283 {
00:10:19.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:19.283 "dma_device_type": 2
00:10:19.283 }
00:10:19.283 ],
00:10:19.283 "driver_specific": {}
00:10:19.283 }
00:10:19.283 ]
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.283 [2024-11-27 04:27:15.697260] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:19.283 [2024-11-27 04:27:15.697339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:19.283 [2024-11-27 04:27:15.697371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:19.283 [2024-11-27 04:27:15.699469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.283 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:19.283 "name": "Existed_Raid",
00:10:19.284 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:19.284 "strip_size_kb": 64,
00:10:19.284 "state": "configuring",
00:10:19.284 "raid_level": "raid0",
00:10:19.284 "superblock": false,
00:10:19.284 "num_base_bdevs": 3,
00:10:19.284 "num_base_bdevs_discovered": 2,
00:10:19.284 "num_base_bdevs_operational": 3,
00:10:19.284 "base_bdevs_list": [
00:10:19.284 {
00:10:19.284 "name": "BaseBdev1",
00:10:19.284 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:19.284 "is_configured": false,
00:10:19.284 "data_offset": 0,
00:10:19.284 "data_size": 0
00:10:19.284 },
00:10:19.284 {
00:10:19.284 "name": "BaseBdev2",
00:10:19.284 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327",
00:10:19.284 "is_configured": true,
00:10:19.284 "data_offset": 0,
00:10:19.284 "data_size": 65536
00:10:19.284 },
00:10:19.284 {
00:10:19.284 "name": "BaseBdev3",
00:10:19.284 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7",
00:10:19.284 "is_configured": true,
00:10:19.284 "data_offset": 0,
00:10:19.284 "data_size": 65536
00:10:19.284 }
00:10:19.284 ]
00:10:19.284 }'
00:10:19.284 04:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:19.284 04:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.853 [2024-11-27 04:27:16.164449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:19.853 "name": "Existed_Raid",
00:10:19.853 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:19.853 "strip_size_kb": 64,
00:10:19.853 "state": "configuring",
00:10:19.853 "raid_level": "raid0",
00:10:19.853 "superblock": false,
00:10:19.853 "num_base_bdevs": 3,
00:10:19.853 "num_base_bdevs_discovered": 1,
00:10:19.853 "num_base_bdevs_operational": 3,
00:10:19.853 "base_bdevs_list": [
00:10:19.853 {
00:10:19.853 "name": "BaseBdev1",
00:10:19.853 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:19.853 "is_configured": false,
00:10:19.853 "data_offset": 0,
00:10:19.853 "data_size": 0
00:10:19.853 },
00:10:19.853 {
00:10:19.853 "name": null,
00:10:19.853 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327",
00:10:19.853 "is_configured": false,
00:10:19.853 "data_offset": 0,
00:10:19.853 "data_size": 65536
00:10:19.853 },
00:10:19.853 {
00:10:19.853 "name": "BaseBdev3",
00:10:19.853 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7",
00:10:19.853 "is_configured": true,
00:10:19.853 "data_offset": 0,
00:10:19.853 "data_size": 65536
00:10:19.853 }
00:10:19.853 ]
00:10:19.853 }'
00:10:19.853 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:19.854 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.113 [2024-11-27 04:27:16.681206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:20.113 BaseBdev1
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.113 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.372 [
00:10:20.372 {
00:10:20.372 "name": "BaseBdev1",
00:10:20.372 "aliases": [
00:10:20.372 "4513ac55-3c32-4f91-9f30-88de7e1a2d90"
00:10:20.372 ],
00:10:20.372 "product_name": "Malloc disk",
00:10:20.372 "block_size": 512,
00:10:20.372 "num_blocks": 65536,
00:10:20.372 "uuid": "4513ac55-3c32-4f91-9f30-88de7e1a2d90",
00:10:20.372 "assigned_rate_limits": {
00:10:20.372 "rw_ios_per_sec": 0,
00:10:20.372 "rw_mbytes_per_sec": 0,
00:10:20.372 "r_mbytes_per_sec": 0,
00:10:20.372 "w_mbytes_per_sec": 0
00:10:20.372 },
00:10:20.372 "claimed": true,
00:10:20.372 "claim_type": "exclusive_write",
00:10:20.372 "zoned": false,
00:10:20.372 "supported_io_types": {
00:10:20.372 "read": true,
00:10:20.372 "write": true,
00:10:20.372 "unmap": true,
00:10:20.372 "flush": true,
00:10:20.372 "reset": true,
00:10:20.372 "nvme_admin": false,
00:10:20.372 "nvme_io": false,
00:10:20.372 "nvme_io_md": false,
00:10:20.372 "write_zeroes": true,
00:10:20.372 "zcopy": true,
00:10:20.372 "get_zone_info": false,
00:10:20.372 "zone_management": false,
00:10:20.372 "zone_append": false,
00:10:20.372 "compare": false,
00:10:20.372 "compare_and_write": false,
00:10:20.372 "abort": true,
00:10:20.372 "seek_hole": false,
00:10:20.372 "seek_data": false,
00:10:20.372 "copy": true,
00:10:20.372 "nvme_iov_md": false
00:10:20.372 },
00:10:20.372 "memory_domains": [
00:10:20.372 {
00:10:20.372 "dma_device_id": "system",
00:10:20.372 "dma_device_type": 1
00:10:20.372 },
00:10:20.372 {
00:10:20.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:20.372 "dma_device_type": 2
00:10:20.372 }
00:10:20.372 ],
00:10:20.372 "driver_specific": {}
00:10:20.372 }
00:10:20.372 ]
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.372 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:20.372 "name": "Existed_Raid",
00:10:20.373 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:20.373 "strip_size_kb": 64,
00:10:20.373 "state": "configuring",
00:10:20.373 "raid_level": "raid0",
00:10:20.373 "superblock": false,
00:10:20.373 "num_base_bdevs": 3,
00:10:20.373 "num_base_bdevs_discovered": 2,
00:10:20.373 "num_base_bdevs_operational": 3,
00:10:20.373 "base_bdevs_list": [
00:10:20.373 {
00:10:20.373 "name": "BaseBdev1",
00:10:20.373 "uuid": "4513ac55-3c32-4f91-9f30-88de7e1a2d90",
00:10:20.373 "is_configured": true,
00:10:20.373 "data_offset": 0,
00:10:20.373 "data_size": 65536
00:10:20.373 },
00:10:20.373 {
00:10:20.373 "name": null,
00:10:20.373 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327",
00:10:20.373 "is_configured": false,
00:10:20.373 "data_offset": 0,
00:10:20.373 "data_size": 65536
00:10:20.373 },
00:10:20.373 {
00:10:20.373 "name": "BaseBdev3",
00:10:20.373 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7",
00:10:20.373 "is_configured": true,
00:10:20.373 "data_offset": 0,
00:10:20.373 "data_size": 65536
00:10:20.373 }
00:10:20.373 ]
00:10:20.373 }'
00:10:20.373 04:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:20.373 04:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.632 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.632 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:20.632 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.632 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.632 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.632 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:20.632 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:20.632 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.632 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.632 [2024-11-27 04:27:17.216425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:20.892 "name": "Existed_Raid",
00:10:20.892 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:20.892 "strip_size_kb": 64,
00:10:20.892 "state": "configuring",
00:10:20.892 "raid_level": "raid0",
00:10:20.892 "superblock": false,
00:10:20.892 "num_base_bdevs": 3,
00:10:20.892 "num_base_bdevs_discovered": 1,
00:10:20.892 "num_base_bdevs_operational": 3,
00:10:20.892 "base_bdevs_list": [
00:10:20.892 {
00:10:20.892 "name": "BaseBdev1",
00:10:20.892 "uuid": "4513ac55-3c32-4f91-9f30-88de7e1a2d90",
00:10:20.892 "is_configured": true,
00:10:20.892 "data_offset": 0,
00:10:20.892 "data_size": 65536
00:10:20.892 },
00:10:20.892 {
00:10:20.892 "name": null,
00:10:20.892 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327",
00:10:20.892 "is_configured": false,
00:10:20.892 "data_offset": 0,
00:10:20.892 "data_size": 65536
00:10:20.892 },
00:10:20.892 {
00:10:20.892 "name": null,
00:10:20.892 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7",
00:10:20.892 "is_configured": false,
00:10:20.892 "data_offset": 0,
00:10:20.892 "data_size": 65536
00:10:20.892 }
00:10:20.892 ]
00:10:20.892 }'
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:20.892 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.151 [2024-11-27 04:27:17.660048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.151 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:21.151 "name": "Existed_Raid",
00:10:21.151 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:21.151 "strip_size_kb": 64,
00:10:21.151 "state": "configuring",
00:10:21.151 "raid_level": "raid0",
00:10:21.151 "superblock": false,
00:10:21.151 "num_base_bdevs": 3,
00:10:21.151 "num_base_bdevs_discovered": 2,
00:10:21.151 "num_base_bdevs_operational": 3,
00:10:21.151 "base_bdevs_list": [
00:10:21.151 {
00:10:21.152 "name": "BaseBdev1",
00:10:21.152 "uuid": "4513ac55-3c32-4f91-9f30-88de7e1a2d90",
00:10:21.152 "is_configured": true,
00:10:21.152 "data_offset": 0,
00:10:21.152 "data_size": 65536
00:10:21.152 },
00:10:21.152 {
00:10:21.152 "name": null,
00:10:21.152 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327",
00:10:21.152 "is_configured": false,
00:10:21.152 "data_offset": 0,
00:10:21.152 "data_size": 65536
00:10:21.152 },
00:10:21.152 {
00:10:21.152 "name": "BaseBdev3",
00:10:21.152 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7",
00:10:21.152 "is_configured": true,
00:10:21.152 "data_offset": 0,
00:10:21.152 "data_size": 65536
00:10:21.152 }
00:10:21.152 ]
00:10:21.152 }'
00:10:21.152 04:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:21.152 04:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.720 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.720 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:21.720 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.720 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.720 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.720 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:21.720 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:21.720 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.721 [2024-11-27 04:27:18.155232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.721 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:21.981 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.981 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:21.981 "name": "Existed_Raid",
00:10:21.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:21.981 "strip_size_kb": 64,
00:10:21.981 "state": "configuring",
00:10:21.981 "raid_level": "raid0",
00:10:21.981 "superblock": false,
00:10:21.981 "num_base_bdevs": 3,
00:10:21.981 "num_base_bdevs_discovered": 1,
00:10:21.981 "num_base_bdevs_operational": 3,
00:10:21.981 "base_bdevs_list": [
00:10:21.981 {
00:10:21.981 "name": null,
00:10:21.981 "uuid": "4513ac55-3c32-4f91-9f30-88de7e1a2d90",
00:10:21.981 "is_configured": false,
00:10:21.981 "data_offset": 0,
00:10:21.981 "data_size": 65536
00:10:21.981 },
00:10:21.981 {
00:10:21.981 "name": null,
00:10:21.981 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327",
00:10:21.981 "is_configured": false,
00:10:21.981 "data_offset": 0,
00:10:21.981 "data_size": 65536
00:10:21.981 },
00:10:21.981 {
00:10:21.981 "name": "BaseBdev3",
00:10:21.981 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7",
00:10:21.981 "is_configured": true,
00:10:21.981 "data_offset": 0,
00:10:21.981 "data_size": 65536
00:10:21.981 }
00:10:21.981 ]
00:10:21.981 }'
00:10:21.981 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:21.981 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 --
# [[ 0 == 0 ]] 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.241 [2024-11-27 04:27:18.797141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:22.241 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.242 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.242 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.502 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.502 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.502 "name": "Existed_Raid", 00:10:22.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.502 "strip_size_kb": 64, 00:10:22.502 "state": "configuring", 00:10:22.502 "raid_level": "raid0", 00:10:22.502 "superblock": false, 00:10:22.502 "num_base_bdevs": 3, 00:10:22.502 "num_base_bdevs_discovered": 2, 00:10:22.502 "num_base_bdevs_operational": 3, 00:10:22.502 "base_bdevs_list": [ 00:10:22.502 { 00:10:22.502 "name": null, 00:10:22.502 "uuid": "4513ac55-3c32-4f91-9f30-88de7e1a2d90", 00:10:22.502 "is_configured": false, 00:10:22.502 "data_offset": 0, 00:10:22.502 "data_size": 65536 00:10:22.502 }, 00:10:22.502 { 00:10:22.502 "name": "BaseBdev2", 00:10:22.502 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327", 00:10:22.502 "is_configured": true, 00:10:22.502 "data_offset": 0, 00:10:22.502 "data_size": 65536 00:10:22.502 }, 00:10:22.502 { 00:10:22.502 "name": "BaseBdev3", 00:10:22.502 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7", 00:10:22.502 "is_configured": true, 00:10:22.502 "data_offset": 0, 00:10:22.502 "data_size": 65536 00:10:22.502 } 00:10:22.502 ] 00:10:22.502 }' 00:10:22.502 04:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.502 04:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:22.764 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4513ac55-3c32-4f91-9f30-88de7e1a2d90 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.027 [2024-11-27 04:27:19.423400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:23.027 [2024-11-27 04:27:19.423474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:23.027 [2024-11-27 04:27:19.423488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:23.027 [2024-11-27 04:27:19.423804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:10:23.027 [2024-11-27 04:27:19.424038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:23.027 [2024-11-27 04:27:19.424058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:23.027 [2024-11-27 04:27:19.424439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.027 NewBaseBdev 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.027 [ 00:10:23.027 { 00:10:23.027 "name": "NewBaseBdev", 00:10:23.027 "aliases": [ 00:10:23.027 "4513ac55-3c32-4f91-9f30-88de7e1a2d90" 00:10:23.027 ], 00:10:23.027 "product_name": "Malloc disk", 00:10:23.027 "block_size": 512, 00:10:23.027 "num_blocks": 65536, 00:10:23.027 "uuid": "4513ac55-3c32-4f91-9f30-88de7e1a2d90", 00:10:23.027 "assigned_rate_limits": { 00:10:23.027 "rw_ios_per_sec": 0, 00:10:23.027 "rw_mbytes_per_sec": 0, 00:10:23.027 "r_mbytes_per_sec": 0, 00:10:23.027 "w_mbytes_per_sec": 0 00:10:23.027 }, 00:10:23.027 "claimed": true, 00:10:23.027 "claim_type": "exclusive_write", 00:10:23.027 "zoned": false, 00:10:23.027 "supported_io_types": { 00:10:23.027 "read": true, 00:10:23.027 "write": true, 00:10:23.027 "unmap": true, 00:10:23.027 "flush": true, 00:10:23.027 "reset": true, 00:10:23.027 "nvme_admin": false, 00:10:23.027 "nvme_io": false, 00:10:23.027 "nvme_io_md": false, 00:10:23.027 "write_zeroes": true, 00:10:23.027 "zcopy": true, 00:10:23.027 "get_zone_info": false, 00:10:23.027 "zone_management": false, 00:10:23.027 "zone_append": false, 00:10:23.027 "compare": false, 00:10:23.027 "compare_and_write": false, 00:10:23.027 "abort": true, 00:10:23.027 "seek_hole": false, 00:10:23.027 "seek_data": false, 00:10:23.027 "copy": true, 00:10:23.027 "nvme_iov_md": false 00:10:23.027 }, 00:10:23.027 "memory_domains": [ 00:10:23.027 { 00:10:23.027 "dma_device_id": "system", 00:10:23.027 "dma_device_type": 1 00:10:23.027 }, 00:10:23.027 { 00:10:23.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.027 "dma_device_type": 2 00:10:23.027 } 00:10:23.027 ], 00:10:23.027 "driver_specific": {} 00:10:23.027 } 00:10:23.027 ] 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.027 "name": "Existed_Raid", 00:10:23.027 "uuid": "cc672058-c320-49d8-9d2d-284e3b98e8f0", 00:10:23.027 "strip_size_kb": 64, 00:10:23.027 "state": "online", 00:10:23.027 "raid_level": "raid0", 00:10:23.027 "superblock": false, 00:10:23.027 
"num_base_bdevs": 3, 00:10:23.027 "num_base_bdevs_discovered": 3, 00:10:23.027 "num_base_bdevs_operational": 3, 00:10:23.027 "base_bdevs_list": [ 00:10:23.027 { 00:10:23.027 "name": "NewBaseBdev", 00:10:23.027 "uuid": "4513ac55-3c32-4f91-9f30-88de7e1a2d90", 00:10:23.027 "is_configured": true, 00:10:23.027 "data_offset": 0, 00:10:23.027 "data_size": 65536 00:10:23.027 }, 00:10:23.027 { 00:10:23.027 "name": "BaseBdev2", 00:10:23.027 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327", 00:10:23.027 "is_configured": true, 00:10:23.027 "data_offset": 0, 00:10:23.027 "data_size": 65536 00:10:23.027 }, 00:10:23.027 { 00:10:23.027 "name": "BaseBdev3", 00:10:23.027 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7", 00:10:23.027 "is_configured": true, 00:10:23.027 "data_offset": 0, 00:10:23.027 "data_size": 65536 00:10:23.027 } 00:10:23.027 ] 00:10:23.027 }' 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.027 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.595 04:27:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.595 [2024-11-27 04:27:19.887137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.595 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.595 "name": "Existed_Raid", 00:10:23.595 "aliases": [ 00:10:23.595 "cc672058-c320-49d8-9d2d-284e3b98e8f0" 00:10:23.595 ], 00:10:23.595 "product_name": "Raid Volume", 00:10:23.595 "block_size": 512, 00:10:23.595 "num_blocks": 196608, 00:10:23.595 "uuid": "cc672058-c320-49d8-9d2d-284e3b98e8f0", 00:10:23.595 "assigned_rate_limits": { 00:10:23.595 "rw_ios_per_sec": 0, 00:10:23.595 "rw_mbytes_per_sec": 0, 00:10:23.595 "r_mbytes_per_sec": 0, 00:10:23.595 "w_mbytes_per_sec": 0 00:10:23.595 }, 00:10:23.595 "claimed": false, 00:10:23.595 "zoned": false, 00:10:23.595 "supported_io_types": { 00:10:23.595 "read": true, 00:10:23.595 "write": true, 00:10:23.595 "unmap": true, 00:10:23.595 "flush": true, 00:10:23.595 "reset": true, 00:10:23.595 "nvme_admin": false, 00:10:23.595 "nvme_io": false, 00:10:23.595 "nvme_io_md": false, 00:10:23.595 "write_zeroes": true, 00:10:23.595 "zcopy": false, 00:10:23.595 "get_zone_info": false, 00:10:23.595 "zone_management": false, 00:10:23.595 "zone_append": false, 00:10:23.595 "compare": false, 00:10:23.595 "compare_and_write": false, 00:10:23.595 "abort": false, 00:10:23.595 "seek_hole": false, 00:10:23.595 "seek_data": false, 00:10:23.595 "copy": false, 00:10:23.595 "nvme_iov_md": false 00:10:23.595 }, 00:10:23.595 "memory_domains": [ 00:10:23.595 { 00:10:23.595 "dma_device_id": "system", 00:10:23.595 "dma_device_type": 1 00:10:23.595 }, 00:10:23.595 { 00:10:23.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.595 
"dma_device_type": 2 00:10:23.595 }, 00:10:23.595 { 00:10:23.595 "dma_device_id": "system", 00:10:23.595 "dma_device_type": 1 00:10:23.595 }, 00:10:23.595 { 00:10:23.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.595 "dma_device_type": 2 00:10:23.595 }, 00:10:23.595 { 00:10:23.595 "dma_device_id": "system", 00:10:23.595 "dma_device_type": 1 00:10:23.595 }, 00:10:23.595 { 00:10:23.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.595 "dma_device_type": 2 00:10:23.595 } 00:10:23.595 ], 00:10:23.595 "driver_specific": { 00:10:23.595 "raid": { 00:10:23.595 "uuid": "cc672058-c320-49d8-9d2d-284e3b98e8f0", 00:10:23.595 "strip_size_kb": 64, 00:10:23.595 "state": "online", 00:10:23.595 "raid_level": "raid0", 00:10:23.595 "superblock": false, 00:10:23.595 "num_base_bdevs": 3, 00:10:23.595 "num_base_bdevs_discovered": 3, 00:10:23.595 "num_base_bdevs_operational": 3, 00:10:23.595 "base_bdevs_list": [ 00:10:23.595 { 00:10:23.595 "name": "NewBaseBdev", 00:10:23.595 "uuid": "4513ac55-3c32-4f91-9f30-88de7e1a2d90", 00:10:23.595 "is_configured": true, 00:10:23.595 "data_offset": 0, 00:10:23.595 "data_size": 65536 00:10:23.595 }, 00:10:23.595 { 00:10:23.595 "name": "BaseBdev2", 00:10:23.595 "uuid": "cfcebe54-3675-4176-8da9-f5d94ae48327", 00:10:23.595 "is_configured": true, 00:10:23.595 "data_offset": 0, 00:10:23.595 "data_size": 65536 00:10:23.595 }, 00:10:23.595 { 00:10:23.595 "name": "BaseBdev3", 00:10:23.595 "uuid": "91e6b9df-d313-4e8c-b5c6-e2ef9c6729f7", 00:10:23.595 "is_configured": true, 00:10:23.595 "data_offset": 0, 00:10:23.595 "data_size": 65536 00:10:23.595 } 00:10:23.595 ] 00:10:23.596 } 00:10:23.596 } 00:10:23.596 }' 00:10:23.596 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.596 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:23.596 BaseBdev2 00:10:23.596 BaseBdev3' 
00:10:23.596 04:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.596 [2024-11-27 04:27:20.158334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.596 [2024-11-27 04:27:20.158390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.596 [2024-11-27 04:27:20.158532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.596 [2024-11-27 04:27:20.158615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.596 [2024-11-27 
04:27:20.158636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64004 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64004 ']' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64004 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.596 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64004 00:10:23.865 killing process with pid 64004 00:10:23.865 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.865 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.865 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64004' 00:10:23.865 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64004 00:10:23.865 [2024-11-27 04:27:20.206672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.865 04:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64004 00:10:24.123 [2024-11-27 04:27:20.597616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:25.502 00:10:25.502 real 0m11.396s 00:10:25.502 user 0m17.836s 00:10:25.502 sys 0m1.977s 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.502 ************************************ 00:10:25.502 END TEST raid_state_function_test 00:10:25.502 ************************************ 00:10:25.502 04:27:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:25.502 04:27:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.502 04:27:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.502 04:27:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.502 ************************************ 00:10:25.502 START TEST raid_state_function_test_sb 00:10:25.502 ************************************ 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.502 04:27:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=64636 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64636' 00:10:25.502 Process raid pid: 64636 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64636 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64636 ']' 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.502 04:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.502 [2024-11-27 04:27:22.057260] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:25.502 [2024-11-27 04:27:22.057502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.761 [2024-11-27 04:27:22.233706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.020 [2024-11-27 04:27:22.361722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.020 [2024-11-27 04:27:22.595256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.021 [2024-11-27 04:27:22.595373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.590 [2024-11-27 04:27:22.933296] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.590 [2024-11-27 04:27:22.933352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.590 [2024-11-27 04:27:22.933363] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.590 [2024-11-27 04:27:22.933373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.590 [2024-11-27 04:27:22.933379] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:26.590 [2024-11-27 04:27:22.933388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.590 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.591 "name": "Existed_Raid", 00:10:26.591 "uuid": "b2183d6f-ba48-45cb-b06c-5f723a8884b3", 00:10:26.591 "strip_size_kb": 64, 00:10:26.591 "state": "configuring", 00:10:26.591 "raid_level": "raid0", 00:10:26.591 "superblock": true, 00:10:26.591 "num_base_bdevs": 3, 00:10:26.591 "num_base_bdevs_discovered": 0, 00:10:26.591 "num_base_bdevs_operational": 3, 00:10:26.591 "base_bdevs_list": [ 00:10:26.591 { 00:10:26.591 "name": "BaseBdev1", 00:10:26.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.591 "is_configured": false, 00:10:26.591 "data_offset": 0, 00:10:26.591 "data_size": 0 00:10:26.591 }, 00:10:26.591 { 00:10:26.591 "name": "BaseBdev2", 00:10:26.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.591 "is_configured": false, 00:10:26.591 "data_offset": 0, 00:10:26.591 "data_size": 0 00:10:26.591 }, 00:10:26.591 { 00:10:26.591 "name": "BaseBdev3", 00:10:26.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.591 "is_configured": false, 00:10:26.591 "data_offset": 0, 00:10:26.591 "data_size": 0 00:10:26.591 } 00:10:26.591 ] 00:10:26.591 }' 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.591 04:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.850 [2024-11-27 04:27:23.396486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.850 [2024-11-27 04:27:23.396584] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.850 [2024-11-27 04:27:23.404477] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.850 [2024-11-27 04:27:23.404575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.850 [2024-11-27 04:27:23.404627] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.850 [2024-11-27 04:27:23.404675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.850 [2024-11-27 04:27:23.404722] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.850 [2024-11-27 04:27:23.404766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.850 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.111 [2024-11-27 04:27:23.451467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.111 BaseBdev1 
00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.111 [ 00:10:27.111 { 00:10:27.111 "name": "BaseBdev1", 00:10:27.111 "aliases": [ 00:10:27.111 "e2640838-284d-49e0-9c3b-023f001b1bf8" 00:10:27.111 ], 00:10:27.111 "product_name": "Malloc disk", 00:10:27.111 "block_size": 512, 00:10:27.111 "num_blocks": 65536, 00:10:27.111 "uuid": "e2640838-284d-49e0-9c3b-023f001b1bf8", 00:10:27.111 "assigned_rate_limits": { 00:10:27.111 
"rw_ios_per_sec": 0, 00:10:27.111 "rw_mbytes_per_sec": 0, 00:10:27.111 "r_mbytes_per_sec": 0, 00:10:27.111 "w_mbytes_per_sec": 0 00:10:27.111 }, 00:10:27.111 "claimed": true, 00:10:27.111 "claim_type": "exclusive_write", 00:10:27.111 "zoned": false, 00:10:27.111 "supported_io_types": { 00:10:27.111 "read": true, 00:10:27.111 "write": true, 00:10:27.111 "unmap": true, 00:10:27.111 "flush": true, 00:10:27.111 "reset": true, 00:10:27.111 "nvme_admin": false, 00:10:27.111 "nvme_io": false, 00:10:27.111 "nvme_io_md": false, 00:10:27.111 "write_zeroes": true, 00:10:27.111 "zcopy": true, 00:10:27.111 "get_zone_info": false, 00:10:27.111 "zone_management": false, 00:10:27.111 "zone_append": false, 00:10:27.111 "compare": false, 00:10:27.111 "compare_and_write": false, 00:10:27.111 "abort": true, 00:10:27.111 "seek_hole": false, 00:10:27.111 "seek_data": false, 00:10:27.111 "copy": true, 00:10:27.111 "nvme_iov_md": false 00:10:27.111 }, 00:10:27.111 "memory_domains": [ 00:10:27.111 { 00:10:27.111 "dma_device_id": "system", 00:10:27.111 "dma_device_type": 1 00:10:27.111 }, 00:10:27.111 { 00:10:27.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.111 "dma_device_type": 2 00:10:27.111 } 00:10:27.111 ], 00:10:27.111 "driver_specific": {} 00:10:27.111 } 00:10:27.111 ] 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.111 "name": "Existed_Raid", 00:10:27.111 "uuid": "716b8e1b-bd3c-45c5-ab79-9f3b65162d63", 00:10:27.111 "strip_size_kb": 64, 00:10:27.111 "state": "configuring", 00:10:27.111 "raid_level": "raid0", 00:10:27.111 "superblock": true, 00:10:27.111 "num_base_bdevs": 3, 00:10:27.111 "num_base_bdevs_discovered": 1, 00:10:27.111 "num_base_bdevs_operational": 3, 00:10:27.111 "base_bdevs_list": [ 00:10:27.111 { 00:10:27.111 "name": "BaseBdev1", 00:10:27.111 "uuid": "e2640838-284d-49e0-9c3b-023f001b1bf8", 00:10:27.111 "is_configured": true, 00:10:27.111 "data_offset": 2048, 00:10:27.111 "data_size": 63488 
00:10:27.111 }, 00:10:27.111 { 00:10:27.111 "name": "BaseBdev2", 00:10:27.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.111 "is_configured": false, 00:10:27.111 "data_offset": 0, 00:10:27.111 "data_size": 0 00:10:27.111 }, 00:10:27.111 { 00:10:27.111 "name": "BaseBdev3", 00:10:27.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.111 "is_configured": false, 00:10:27.111 "data_offset": 0, 00:10:27.111 "data_size": 0 00:10:27.111 } 00:10:27.111 ] 00:10:27.111 }' 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.111 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.371 [2024-11-27 04:27:23.914850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.371 [2024-11-27 04:27:23.914978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.371 [2024-11-27 04:27:23.926902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.371 [2024-11-27 
04:27:23.928970] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.371 [2024-11-27 04:27:23.929022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.371 [2024-11-27 04:27:23.929033] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.371 [2024-11-27 04:27:23.929044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.371 04:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.630 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.630 "name": "Existed_Raid", 00:10:27.630 "uuid": "1bd33c1c-63cb-4781-8710-6a2d0a6e3282", 00:10:27.630 "strip_size_kb": 64, 00:10:27.630 "state": "configuring", 00:10:27.630 "raid_level": "raid0", 00:10:27.630 "superblock": true, 00:10:27.630 "num_base_bdevs": 3, 00:10:27.630 "num_base_bdevs_discovered": 1, 00:10:27.630 "num_base_bdevs_operational": 3, 00:10:27.630 "base_bdevs_list": [ 00:10:27.630 { 00:10:27.630 "name": "BaseBdev1", 00:10:27.630 "uuid": "e2640838-284d-49e0-9c3b-023f001b1bf8", 00:10:27.630 "is_configured": true, 00:10:27.630 "data_offset": 2048, 00:10:27.630 "data_size": 63488 00:10:27.630 }, 00:10:27.630 { 00:10:27.630 "name": "BaseBdev2", 00:10:27.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.630 "is_configured": false, 00:10:27.630 "data_offset": 0, 00:10:27.630 "data_size": 0 00:10:27.630 }, 00:10:27.630 { 00:10:27.630 "name": "BaseBdev3", 00:10:27.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.630 "is_configured": false, 00:10:27.630 "data_offset": 0, 00:10:27.630 "data_size": 0 00:10:27.630 } 00:10:27.630 ] 00:10:27.630 }' 00:10:27.630 04:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.630 04:27:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.888 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.889 [2024-11-27 04:27:24.436468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.889 BaseBdev2 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.889 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.889 [ 00:10:27.889 { 00:10:27.889 "name": "BaseBdev2", 00:10:27.889 "aliases": [ 00:10:27.889 "4a802ad1-5674-493e-ae30-df81c1b0a406" 00:10:27.889 ], 00:10:27.889 "product_name": "Malloc disk", 00:10:27.889 "block_size": 512, 00:10:27.889 "num_blocks": 65536, 00:10:27.889 "uuid": "4a802ad1-5674-493e-ae30-df81c1b0a406", 00:10:27.889 "assigned_rate_limits": { 00:10:27.889 "rw_ios_per_sec": 0, 00:10:27.889 "rw_mbytes_per_sec": 0, 00:10:27.889 "r_mbytes_per_sec": 0, 00:10:27.889 "w_mbytes_per_sec": 0 00:10:27.889 }, 00:10:27.889 "claimed": true, 00:10:27.889 "claim_type": "exclusive_write", 00:10:27.889 "zoned": false, 00:10:27.889 "supported_io_types": { 00:10:27.889 "read": true, 00:10:27.889 "write": true, 00:10:27.889 "unmap": true, 00:10:27.889 "flush": true, 00:10:27.889 "reset": true, 00:10:27.889 "nvme_admin": false, 00:10:27.889 "nvme_io": false, 00:10:27.889 "nvme_io_md": false, 00:10:27.889 "write_zeroes": true, 00:10:27.889 "zcopy": true, 00:10:27.889 "get_zone_info": false, 00:10:27.889 "zone_management": false, 00:10:27.889 "zone_append": false, 00:10:27.889 "compare": false, 00:10:27.889 "compare_and_write": false, 00:10:27.889 "abort": true, 00:10:27.889 "seek_hole": false, 00:10:27.889 "seek_data": false, 00:10:27.889 "copy": true, 00:10:27.889 "nvme_iov_md": false 00:10:27.889 }, 00:10:27.889 "memory_domains": [ 00:10:27.889 { 00:10:27.889 "dma_device_id": "system", 00:10:27.889 "dma_device_type": 1 00:10:27.889 }, 00:10:27.889 { 00:10:27.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.889 "dma_device_type": 2 00:10:27.889 } 00:10:27.889 ], 00:10:27.889 "driver_specific": {} 00:10:27.889 } 00:10:27.889 ] 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.148 "name": "Existed_Raid", 00:10:28.148 "uuid": "1bd33c1c-63cb-4781-8710-6a2d0a6e3282", 00:10:28.148 "strip_size_kb": 64, 00:10:28.148 "state": "configuring", 00:10:28.148 "raid_level": "raid0", 00:10:28.148 "superblock": true, 00:10:28.148 "num_base_bdevs": 3, 00:10:28.148 "num_base_bdevs_discovered": 2, 00:10:28.148 "num_base_bdevs_operational": 3, 00:10:28.148 "base_bdevs_list": [ 00:10:28.148 { 00:10:28.148 "name": "BaseBdev1", 00:10:28.148 "uuid": "e2640838-284d-49e0-9c3b-023f001b1bf8", 00:10:28.148 "is_configured": true, 00:10:28.148 "data_offset": 2048, 00:10:28.148 "data_size": 63488 00:10:28.148 }, 00:10:28.148 { 00:10:28.148 "name": "BaseBdev2", 00:10:28.148 "uuid": "4a802ad1-5674-493e-ae30-df81c1b0a406", 00:10:28.148 "is_configured": true, 00:10:28.148 "data_offset": 2048, 00:10:28.148 "data_size": 63488 00:10:28.148 }, 00:10:28.148 { 00:10:28.148 "name": "BaseBdev3", 00:10:28.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.148 "is_configured": false, 00:10:28.148 "data_offset": 0, 00:10:28.148 "data_size": 0 00:10:28.148 } 00:10:28.148 ] 00:10:28.148 }' 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.148 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.407 04:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.407 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.408 04:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.668 [2024-11-27 04:27:24.999951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.668 [2024-11-27 04:27:25.000318] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:28.668 [2024-11-27 04:27:25.000348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:28.668 BaseBdev3 00:10:28.668 [2024-11-27 04:27:25.000655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:28.668 [2024-11-27 04:27:25.000839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:28.668 [2024-11-27 04:27:25.000851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:28.668 [2024-11-27 04:27:25.001032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.668 [ 00:10:28.668 { 00:10:28.668 "name": "BaseBdev3", 00:10:28.668 "aliases": [ 00:10:28.668 "83e010b0-cc9d-46a1-98db-1e69865c8415" 00:10:28.668 ], 00:10:28.668 "product_name": "Malloc disk", 00:10:28.668 "block_size": 512, 00:10:28.668 "num_blocks": 65536, 00:10:28.668 "uuid": "83e010b0-cc9d-46a1-98db-1e69865c8415", 00:10:28.668 "assigned_rate_limits": { 00:10:28.668 "rw_ios_per_sec": 0, 00:10:28.668 "rw_mbytes_per_sec": 0, 00:10:28.668 "r_mbytes_per_sec": 0, 00:10:28.668 "w_mbytes_per_sec": 0 00:10:28.668 }, 00:10:28.668 "claimed": true, 00:10:28.668 "claim_type": "exclusive_write", 00:10:28.668 "zoned": false, 00:10:28.668 "supported_io_types": { 00:10:28.668 "read": true, 00:10:28.668 "write": true, 00:10:28.668 "unmap": true, 00:10:28.668 "flush": true, 00:10:28.668 "reset": true, 00:10:28.668 "nvme_admin": false, 00:10:28.668 "nvme_io": false, 00:10:28.668 "nvme_io_md": false, 00:10:28.668 "write_zeroes": true, 00:10:28.668 "zcopy": true, 00:10:28.668 "get_zone_info": false, 00:10:28.668 "zone_management": false, 00:10:28.668 "zone_append": false, 00:10:28.668 "compare": false, 00:10:28.668 "compare_and_write": false, 00:10:28.668 "abort": true, 00:10:28.668 "seek_hole": false, 00:10:28.668 "seek_data": false, 00:10:28.668 "copy": true, 00:10:28.668 "nvme_iov_md": false 00:10:28.668 }, 00:10:28.668 "memory_domains": [ 00:10:28.668 { 00:10:28.668 "dma_device_id": "system", 00:10:28.668 "dma_device_type": 1 00:10:28.668 }, 00:10:28.668 { 00:10:28.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.668 "dma_device_type": 2 00:10:28.668 } 00:10:28.668 ], 00:10:28.668 "driver_specific": 
{} 00:10:28.668 } 00:10:28.668 ] 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.668 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.668 "name": "Existed_Raid", 00:10:28.668 "uuid": "1bd33c1c-63cb-4781-8710-6a2d0a6e3282", 00:10:28.668 "strip_size_kb": 64, 00:10:28.668 "state": "online", 00:10:28.668 "raid_level": "raid0", 00:10:28.668 "superblock": true, 00:10:28.668 "num_base_bdevs": 3, 00:10:28.668 "num_base_bdevs_discovered": 3, 00:10:28.668 "num_base_bdevs_operational": 3, 00:10:28.668 "base_bdevs_list": [ 00:10:28.668 { 00:10:28.668 "name": "BaseBdev1", 00:10:28.668 "uuid": "e2640838-284d-49e0-9c3b-023f001b1bf8", 00:10:28.668 "is_configured": true, 00:10:28.668 "data_offset": 2048, 00:10:28.668 "data_size": 63488 00:10:28.668 }, 00:10:28.668 { 00:10:28.669 "name": "BaseBdev2", 00:10:28.669 "uuid": "4a802ad1-5674-493e-ae30-df81c1b0a406", 00:10:28.669 "is_configured": true, 00:10:28.669 "data_offset": 2048, 00:10:28.669 "data_size": 63488 00:10:28.669 }, 00:10:28.669 { 00:10:28.669 "name": "BaseBdev3", 00:10:28.669 "uuid": "83e010b0-cc9d-46a1-98db-1e69865c8415", 00:10:28.669 "is_configured": true, 00:10:28.669 "data_offset": 2048, 00:10:28.669 "data_size": 63488 00:10:28.669 } 00:10:28.669 ] 00:10:28.669 }' 00:10:28.669 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.669 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.928 [2024-11-27 04:27:25.487567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.928 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.188 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.188 "name": "Existed_Raid", 00:10:29.188 "aliases": [ 00:10:29.188 "1bd33c1c-63cb-4781-8710-6a2d0a6e3282" 00:10:29.188 ], 00:10:29.188 "product_name": "Raid Volume", 00:10:29.188 "block_size": 512, 00:10:29.188 "num_blocks": 190464, 00:10:29.188 "uuid": "1bd33c1c-63cb-4781-8710-6a2d0a6e3282", 00:10:29.188 "assigned_rate_limits": { 00:10:29.188 "rw_ios_per_sec": 0, 00:10:29.188 "rw_mbytes_per_sec": 0, 00:10:29.188 "r_mbytes_per_sec": 0, 00:10:29.188 "w_mbytes_per_sec": 0 00:10:29.188 }, 00:10:29.188 "claimed": false, 00:10:29.188 "zoned": false, 00:10:29.188 "supported_io_types": { 00:10:29.188 "read": true, 00:10:29.188 "write": true, 00:10:29.188 "unmap": true, 00:10:29.188 "flush": true, 00:10:29.188 "reset": true, 00:10:29.188 "nvme_admin": false, 00:10:29.188 "nvme_io": false, 00:10:29.188 "nvme_io_md": false, 00:10:29.188 
"write_zeroes": true, 00:10:29.188 "zcopy": false, 00:10:29.188 "get_zone_info": false, 00:10:29.188 "zone_management": false, 00:10:29.188 "zone_append": false, 00:10:29.188 "compare": false, 00:10:29.188 "compare_and_write": false, 00:10:29.188 "abort": false, 00:10:29.188 "seek_hole": false, 00:10:29.188 "seek_data": false, 00:10:29.188 "copy": false, 00:10:29.188 "nvme_iov_md": false 00:10:29.188 }, 00:10:29.188 "memory_domains": [ 00:10:29.188 { 00:10:29.188 "dma_device_id": "system", 00:10:29.188 "dma_device_type": 1 00:10:29.188 }, 00:10:29.188 { 00:10:29.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.188 "dma_device_type": 2 00:10:29.188 }, 00:10:29.188 { 00:10:29.188 "dma_device_id": "system", 00:10:29.188 "dma_device_type": 1 00:10:29.188 }, 00:10:29.188 { 00:10:29.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.188 "dma_device_type": 2 00:10:29.188 }, 00:10:29.188 { 00:10:29.188 "dma_device_id": "system", 00:10:29.188 "dma_device_type": 1 00:10:29.188 }, 00:10:29.188 { 00:10:29.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.188 "dma_device_type": 2 00:10:29.188 } 00:10:29.188 ], 00:10:29.188 "driver_specific": { 00:10:29.188 "raid": { 00:10:29.188 "uuid": "1bd33c1c-63cb-4781-8710-6a2d0a6e3282", 00:10:29.188 "strip_size_kb": 64, 00:10:29.188 "state": "online", 00:10:29.188 "raid_level": "raid0", 00:10:29.188 "superblock": true, 00:10:29.188 "num_base_bdevs": 3, 00:10:29.188 "num_base_bdevs_discovered": 3, 00:10:29.188 "num_base_bdevs_operational": 3, 00:10:29.188 "base_bdevs_list": [ 00:10:29.188 { 00:10:29.188 "name": "BaseBdev1", 00:10:29.188 "uuid": "e2640838-284d-49e0-9c3b-023f001b1bf8", 00:10:29.188 "is_configured": true, 00:10:29.188 "data_offset": 2048, 00:10:29.188 "data_size": 63488 00:10:29.188 }, 00:10:29.188 { 00:10:29.188 "name": "BaseBdev2", 00:10:29.188 "uuid": "4a802ad1-5674-493e-ae30-df81c1b0a406", 00:10:29.189 "is_configured": true, 00:10:29.189 "data_offset": 2048, 00:10:29.189 "data_size": 63488 00:10:29.189 }, 
00:10:29.189 { 00:10:29.189 "name": "BaseBdev3", 00:10:29.189 "uuid": "83e010b0-cc9d-46a1-98db-1e69865c8415", 00:10:29.189 "is_configured": true, 00:10:29.189 "data_offset": 2048, 00:10:29.189 "data_size": 63488 00:10:29.189 } 00:10:29.189 ] 00:10:29.189 } 00:10:29.189 } 00:10:29.189 }' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.189 BaseBdev2 00:10:29.189 BaseBdev3' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.189 
04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.189 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.449 [2024-11-27 04:27:25.778794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.449 [2024-11-27 04:27:25.778826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.449 [2024-11-27 04:27:25.778886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.449 "name": "Existed_Raid", 00:10:29.449 "uuid": "1bd33c1c-63cb-4781-8710-6a2d0a6e3282", 00:10:29.449 "strip_size_kb": 64, 00:10:29.449 "state": "offline", 00:10:29.449 "raid_level": "raid0", 00:10:29.449 "superblock": true, 00:10:29.449 "num_base_bdevs": 3, 00:10:29.449 "num_base_bdevs_discovered": 2, 00:10:29.449 "num_base_bdevs_operational": 2, 00:10:29.449 "base_bdevs_list": [ 00:10:29.449 { 00:10:29.449 "name": null, 00:10:29.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.449 "is_configured": false, 00:10:29.449 "data_offset": 0, 00:10:29.449 "data_size": 63488 00:10:29.449 }, 00:10:29.449 { 00:10:29.449 "name": "BaseBdev2", 00:10:29.449 "uuid": "4a802ad1-5674-493e-ae30-df81c1b0a406", 00:10:29.449 "is_configured": true, 00:10:29.449 "data_offset": 2048, 00:10:29.449 "data_size": 63488 00:10:29.449 }, 00:10:29.449 { 00:10:29.449 "name": "BaseBdev3", 00:10:29.449 "uuid": "83e010b0-cc9d-46a1-98db-1e69865c8415", 
00:10:29.449 "is_configured": true, 00:10:29.449 "data_offset": 2048, 00:10:29.449 "data_size": 63488 00:10:29.449 } 00:10:29.449 ] 00:10:29.449 }' 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.449 04:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.018 [2024-11-27 04:27:26.426744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.018 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.018 [2024-11-27 04:27:26.595703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.018 [2024-11-27 04:27:26.595776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 BaseBdev2 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.279 04:27:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 [ 00:10:30.279 { 00:10:30.279 "name": "BaseBdev2", 00:10:30.279 "aliases": [ 00:10:30.279 "0964879c-dbf2-4091-b0fc-06348a1fcadb" 00:10:30.279 ], 00:10:30.279 "product_name": "Malloc disk", 00:10:30.279 "block_size": 512, 00:10:30.279 "num_blocks": 65536, 00:10:30.279 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:30.279 "assigned_rate_limits": { 00:10:30.279 "rw_ios_per_sec": 0, 00:10:30.279 "rw_mbytes_per_sec": 0, 00:10:30.279 "r_mbytes_per_sec": 0, 00:10:30.279 "w_mbytes_per_sec": 0 00:10:30.279 }, 00:10:30.279 "claimed": false, 00:10:30.279 "zoned": false, 00:10:30.279 "supported_io_types": { 00:10:30.279 "read": true, 00:10:30.279 "write": true, 00:10:30.279 "unmap": true, 00:10:30.279 "flush": true, 00:10:30.279 "reset": true, 00:10:30.279 "nvme_admin": false, 00:10:30.279 "nvme_io": false, 00:10:30.279 "nvme_io_md": false, 00:10:30.279 "write_zeroes": true, 00:10:30.279 "zcopy": true, 00:10:30.279 "get_zone_info": false, 00:10:30.279 
"zone_management": false, 00:10:30.279 "zone_append": false, 00:10:30.279 "compare": false, 00:10:30.279 "compare_and_write": false, 00:10:30.279 "abort": true, 00:10:30.279 "seek_hole": false, 00:10:30.279 "seek_data": false, 00:10:30.279 "copy": true, 00:10:30.279 "nvme_iov_md": false 00:10:30.279 }, 00:10:30.279 "memory_domains": [ 00:10:30.279 { 00:10:30.279 "dma_device_id": "system", 00:10:30.279 "dma_device_type": 1 00:10:30.279 }, 00:10:30.279 { 00:10:30.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.279 "dma_device_type": 2 00:10:30.279 } 00:10:30.279 ], 00:10:30.279 "driver_specific": {} 00:10:30.279 } 00:10:30.279 ] 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.540 BaseBdev3 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.540 [ 00:10:30.540 { 00:10:30.540 "name": "BaseBdev3", 00:10:30.540 "aliases": [ 00:10:30.540 "e32cad93-b3c9-4f17-a803-341208c7a5b2" 00:10:30.540 ], 00:10:30.540 "product_name": "Malloc disk", 00:10:30.540 "block_size": 512, 00:10:30.540 "num_blocks": 65536, 00:10:30.540 "uuid": "e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:30.540 "assigned_rate_limits": { 00:10:30.540 "rw_ios_per_sec": 0, 00:10:30.540 "rw_mbytes_per_sec": 0, 00:10:30.540 "r_mbytes_per_sec": 0, 00:10:30.540 "w_mbytes_per_sec": 0 00:10:30.540 }, 00:10:30.540 "claimed": false, 00:10:30.540 "zoned": false, 00:10:30.540 "supported_io_types": { 00:10:30.540 "read": true, 00:10:30.540 "write": true, 00:10:30.540 "unmap": true, 00:10:30.540 "flush": true, 00:10:30.540 "reset": true, 00:10:30.540 "nvme_admin": false, 00:10:30.540 "nvme_io": false, 00:10:30.540 "nvme_io_md": false, 00:10:30.540 "write_zeroes": true, 00:10:30.540 
"zcopy": true, 00:10:30.540 "get_zone_info": false, 00:10:30.540 "zone_management": false, 00:10:30.540 "zone_append": false, 00:10:30.540 "compare": false, 00:10:30.540 "compare_and_write": false, 00:10:30.540 "abort": true, 00:10:30.540 "seek_hole": false, 00:10:30.540 "seek_data": false, 00:10:30.540 "copy": true, 00:10:30.540 "nvme_iov_md": false 00:10:30.540 }, 00:10:30.540 "memory_domains": [ 00:10:30.540 { 00:10:30.540 "dma_device_id": "system", 00:10:30.540 "dma_device_type": 1 00:10:30.540 }, 00:10:30.540 { 00:10:30.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.540 "dma_device_type": 2 00:10:30.540 } 00:10:30.540 ], 00:10:30.540 "driver_specific": {} 00:10:30.540 } 00:10:30.540 ] 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.540 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.541 [2024-11-27 04:27:26.933557] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.541 [2024-11-27 04:27:26.933682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.541 [2024-11-27 04:27:26.933759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.541 [2024-11-27 04:27:26.935957] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.541 04:27:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.541 "name": "Existed_Raid", 00:10:30.541 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:30.541 "strip_size_kb": 64, 00:10:30.541 "state": "configuring", 00:10:30.541 "raid_level": "raid0", 00:10:30.541 "superblock": true, 00:10:30.541 "num_base_bdevs": 3, 00:10:30.541 "num_base_bdevs_discovered": 2, 00:10:30.541 "num_base_bdevs_operational": 3, 00:10:30.541 "base_bdevs_list": [ 00:10:30.541 { 00:10:30.541 "name": "BaseBdev1", 00:10:30.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.541 "is_configured": false, 00:10:30.541 "data_offset": 0, 00:10:30.541 "data_size": 0 00:10:30.541 }, 00:10:30.541 { 00:10:30.541 "name": "BaseBdev2", 00:10:30.541 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:30.541 "is_configured": true, 00:10:30.541 "data_offset": 2048, 00:10:30.541 "data_size": 63488 00:10:30.541 }, 00:10:30.541 { 00:10:30.541 "name": "BaseBdev3", 00:10:30.541 "uuid": "e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:30.541 "is_configured": true, 00:10:30.541 "data_offset": 2048, 00:10:30.541 "data_size": 63488 00:10:30.541 } 00:10:30.541 ] 00:10:30.541 }' 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.541 04:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.801 [2024-11-27 04:27:27.344853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.801 04:27:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.801 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.060 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.060 "name": "Existed_Raid", 00:10:31.060 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:31.060 "strip_size_kb": 64, 
00:10:31.060 "state": "configuring", 00:10:31.060 "raid_level": "raid0", 00:10:31.060 "superblock": true, 00:10:31.060 "num_base_bdevs": 3, 00:10:31.060 "num_base_bdevs_discovered": 1, 00:10:31.060 "num_base_bdevs_operational": 3, 00:10:31.060 "base_bdevs_list": [ 00:10:31.060 { 00:10:31.060 "name": "BaseBdev1", 00:10:31.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.060 "is_configured": false, 00:10:31.060 "data_offset": 0, 00:10:31.060 "data_size": 0 00:10:31.060 }, 00:10:31.060 { 00:10:31.060 "name": null, 00:10:31.060 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:31.060 "is_configured": false, 00:10:31.060 "data_offset": 0, 00:10:31.060 "data_size": 63488 00:10:31.060 }, 00:10:31.060 { 00:10:31.060 "name": "BaseBdev3", 00:10:31.060 "uuid": "e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:31.060 "is_configured": true, 00:10:31.060 "data_offset": 2048, 00:10:31.060 "data_size": 63488 00:10:31.060 } 00:10:31.060 ] 00:10:31.060 }' 00:10:31.060 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.060 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.319 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.319 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.319 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.319 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:31.319 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.319 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:31.320 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:31.320 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.320 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.579 [2024-11-27 04:27:27.930226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.579 BaseBdev1 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.579 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.580 
[ 00:10:31.580 { 00:10:31.580 "name": "BaseBdev1", 00:10:31.580 "aliases": [ 00:10:31.580 "0924a47b-0b74-4b78-9c6b-c2878d2d3eca" 00:10:31.580 ], 00:10:31.580 "product_name": "Malloc disk", 00:10:31.580 "block_size": 512, 00:10:31.580 "num_blocks": 65536, 00:10:31.580 "uuid": "0924a47b-0b74-4b78-9c6b-c2878d2d3eca", 00:10:31.580 "assigned_rate_limits": { 00:10:31.580 "rw_ios_per_sec": 0, 00:10:31.580 "rw_mbytes_per_sec": 0, 00:10:31.580 "r_mbytes_per_sec": 0, 00:10:31.580 "w_mbytes_per_sec": 0 00:10:31.580 }, 00:10:31.580 "claimed": true, 00:10:31.580 "claim_type": "exclusive_write", 00:10:31.580 "zoned": false, 00:10:31.580 "supported_io_types": { 00:10:31.580 "read": true, 00:10:31.580 "write": true, 00:10:31.580 "unmap": true, 00:10:31.580 "flush": true, 00:10:31.580 "reset": true, 00:10:31.580 "nvme_admin": false, 00:10:31.580 "nvme_io": false, 00:10:31.580 "nvme_io_md": false, 00:10:31.580 "write_zeroes": true, 00:10:31.580 "zcopy": true, 00:10:31.580 "get_zone_info": false, 00:10:31.580 "zone_management": false, 00:10:31.580 "zone_append": false, 00:10:31.580 "compare": false, 00:10:31.580 "compare_and_write": false, 00:10:31.580 "abort": true, 00:10:31.580 "seek_hole": false, 00:10:31.580 "seek_data": false, 00:10:31.580 "copy": true, 00:10:31.580 "nvme_iov_md": false 00:10:31.580 }, 00:10:31.580 "memory_domains": [ 00:10:31.580 { 00:10:31.580 "dma_device_id": "system", 00:10:31.580 "dma_device_type": 1 00:10:31.580 }, 00:10:31.580 { 00:10:31.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.580 "dma_device_type": 2 00:10:31.580 } 00:10:31.580 ], 00:10:31.580 "driver_specific": {} 00:10:31.580 } 00:10:31.580 ] 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.580 04:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.580 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.580 "name": "Existed_Raid", 00:10:31.580 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:31.580 "strip_size_kb": 64, 00:10:31.580 "state": "configuring", 00:10:31.580 "raid_level": "raid0", 00:10:31.580 "superblock": true, 
00:10:31.580 "num_base_bdevs": 3, 00:10:31.580 "num_base_bdevs_discovered": 2, 00:10:31.580 "num_base_bdevs_operational": 3, 00:10:31.580 "base_bdevs_list": [ 00:10:31.580 { 00:10:31.580 "name": "BaseBdev1", 00:10:31.580 "uuid": "0924a47b-0b74-4b78-9c6b-c2878d2d3eca", 00:10:31.580 "is_configured": true, 00:10:31.580 "data_offset": 2048, 00:10:31.580 "data_size": 63488 00:10:31.580 }, 00:10:31.580 { 00:10:31.580 "name": null, 00:10:31.580 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:31.580 "is_configured": false, 00:10:31.580 "data_offset": 0, 00:10:31.580 "data_size": 63488 00:10:31.580 }, 00:10:31.580 { 00:10:31.580 "name": "BaseBdev3", 00:10:31.580 "uuid": "e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:31.580 "is_configured": true, 00:10:31.580 "data_offset": 2048, 00:10:31.580 "data_size": 63488 00:10:31.580 } 00:10:31.580 ] 00:10:31.580 }' 00:10:31.580 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.580 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.149 [2024-11-27 04:27:28.525310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.149 "name": "Existed_Raid", 00:10:32.149 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:32.149 "strip_size_kb": 64, 00:10:32.149 "state": "configuring", 00:10:32.149 "raid_level": "raid0", 00:10:32.149 "superblock": true, 00:10:32.149 "num_base_bdevs": 3, 00:10:32.149 "num_base_bdevs_discovered": 1, 00:10:32.149 "num_base_bdevs_operational": 3, 00:10:32.149 "base_bdevs_list": [ 00:10:32.149 { 00:10:32.149 "name": "BaseBdev1", 00:10:32.149 "uuid": "0924a47b-0b74-4b78-9c6b-c2878d2d3eca", 00:10:32.149 "is_configured": true, 00:10:32.149 "data_offset": 2048, 00:10:32.149 "data_size": 63488 00:10:32.149 }, 00:10:32.149 { 00:10:32.149 "name": null, 00:10:32.149 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:32.149 "is_configured": false, 00:10:32.149 "data_offset": 0, 00:10:32.149 "data_size": 63488 00:10:32.149 }, 00:10:32.149 { 00:10:32.149 "name": null, 00:10:32.149 "uuid": "e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:32.149 "is_configured": false, 00:10:32.149 "data_offset": 0, 00:10:32.149 "data_size": 63488 00:10:32.149 } 00:10:32.149 ] 00:10:32.149 }' 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.149 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.409 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.409 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.409 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.409 04:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:10:32.409 04:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.669 [2024-11-27 04:27:29.032569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.669 "name": "Existed_Raid", 00:10:32.669 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:32.669 "strip_size_kb": 64, 00:10:32.669 "state": "configuring", 00:10:32.669 "raid_level": "raid0", 00:10:32.669 "superblock": true, 00:10:32.669 "num_base_bdevs": 3, 00:10:32.669 "num_base_bdevs_discovered": 2, 00:10:32.669 "num_base_bdevs_operational": 3, 00:10:32.669 "base_bdevs_list": [ 00:10:32.669 { 00:10:32.669 "name": "BaseBdev1", 00:10:32.669 "uuid": "0924a47b-0b74-4b78-9c6b-c2878d2d3eca", 00:10:32.669 "is_configured": true, 00:10:32.669 "data_offset": 2048, 00:10:32.669 "data_size": 63488 00:10:32.669 }, 00:10:32.669 { 00:10:32.669 "name": null, 00:10:32.669 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:32.669 "is_configured": false, 00:10:32.669 "data_offset": 0, 00:10:32.669 "data_size": 63488 00:10:32.669 }, 00:10:32.669 { 00:10:32.669 "name": "BaseBdev3", 00:10:32.669 "uuid": "e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:32.669 "is_configured": true, 00:10:32.669 "data_offset": 2048, 00:10:32.669 "data_size": 63488 00:10:32.669 } 00:10:32.669 ] 00:10:32.669 }' 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.669 04:27:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.928 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.928 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.928 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.928 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.187 [2024-11-27 04:27:29.543958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.187 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.187 "name": "Existed_Raid", 00:10:33.187 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:33.187 "strip_size_kb": 64, 00:10:33.187 "state": "configuring", 00:10:33.187 "raid_level": "raid0", 00:10:33.187 "superblock": true, 00:10:33.187 "num_base_bdevs": 3, 00:10:33.187 "num_base_bdevs_discovered": 1, 00:10:33.187 "num_base_bdevs_operational": 3, 00:10:33.187 "base_bdevs_list": [ 00:10:33.187 { 00:10:33.187 "name": null, 00:10:33.187 "uuid": "0924a47b-0b74-4b78-9c6b-c2878d2d3eca", 00:10:33.187 "is_configured": false, 00:10:33.187 "data_offset": 0, 00:10:33.187 "data_size": 63488 00:10:33.187 }, 00:10:33.187 { 00:10:33.187 "name": null, 00:10:33.187 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:33.187 "is_configured": false, 00:10:33.187 "data_offset": 0, 00:10:33.187 
"data_size": 63488 00:10:33.187 }, 00:10:33.187 { 00:10:33.187 "name": "BaseBdev3", 00:10:33.187 "uuid": "e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:33.187 "is_configured": true, 00:10:33.187 "data_offset": 2048, 00:10:33.187 "data_size": 63488 00:10:33.187 } 00:10:33.187 ] 00:10:33.187 }' 00:10:33.188 04:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.188 04:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.756 [2024-11-27 04:27:30.167669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:33.756 04:27:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.756 "name": "Existed_Raid", 00:10:33.756 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:33.756 "strip_size_kb": 64, 00:10:33.756 "state": "configuring", 00:10:33.756 "raid_level": "raid0", 00:10:33.756 "superblock": true, 00:10:33.756 "num_base_bdevs": 3, 00:10:33.756 
"num_base_bdevs_discovered": 2, 00:10:33.756 "num_base_bdevs_operational": 3, 00:10:33.756 "base_bdevs_list": [ 00:10:33.756 { 00:10:33.756 "name": null, 00:10:33.756 "uuid": "0924a47b-0b74-4b78-9c6b-c2878d2d3eca", 00:10:33.756 "is_configured": false, 00:10:33.756 "data_offset": 0, 00:10:33.756 "data_size": 63488 00:10:33.756 }, 00:10:33.756 { 00:10:33.756 "name": "BaseBdev2", 00:10:33.756 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:33.756 "is_configured": true, 00:10:33.756 "data_offset": 2048, 00:10:33.756 "data_size": 63488 00:10:33.756 }, 00:10:33.756 { 00:10:33.756 "name": "BaseBdev3", 00:10:33.756 "uuid": "e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:33.756 "is_configured": true, 00:10:33.756 "data_offset": 2048, 00:10:33.756 "data_size": 63488 00:10:33.756 } 00:10:33.756 ] 00:10:33.756 }' 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.756 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.325 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.326 04:27:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0924a47b-0b74-4b78-9c6b-c2878d2d3eca 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 [2024-11-27 04:27:30.809182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:34.326 NewBaseBdev 00:10:34.326 [2024-11-27 04:27:30.809577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:34.326 [2024-11-27 04:27:30.809601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:34.326 [2024-11-27 04:27:30.809893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:34.326 [2024-11-27 04:27:30.810067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:34.326 [2024-11-27 04:27:30.810079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:34.326 [2024-11-27 04:27:30.810263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 [ 00:10:34.326 { 00:10:34.326 "name": "NewBaseBdev", 00:10:34.326 "aliases": [ 00:10:34.326 "0924a47b-0b74-4b78-9c6b-c2878d2d3eca" 00:10:34.326 ], 00:10:34.326 "product_name": "Malloc disk", 00:10:34.326 "block_size": 512, 00:10:34.326 "num_blocks": 65536, 00:10:34.326 "uuid": "0924a47b-0b74-4b78-9c6b-c2878d2d3eca", 00:10:34.326 "assigned_rate_limits": { 00:10:34.326 "rw_ios_per_sec": 0, 00:10:34.326 "rw_mbytes_per_sec": 0, 00:10:34.326 "r_mbytes_per_sec": 0, 00:10:34.326 "w_mbytes_per_sec": 0 00:10:34.326 }, 00:10:34.326 "claimed": true, 00:10:34.326 "claim_type": "exclusive_write", 00:10:34.326 "zoned": false, 00:10:34.326 "supported_io_types": { 00:10:34.326 "read": true, 00:10:34.326 "write": true, 
00:10:34.326 "unmap": true, 00:10:34.326 "flush": true, 00:10:34.326 "reset": true, 00:10:34.326 "nvme_admin": false, 00:10:34.326 "nvme_io": false, 00:10:34.326 "nvme_io_md": false, 00:10:34.326 "write_zeroes": true, 00:10:34.326 "zcopy": true, 00:10:34.326 "get_zone_info": false, 00:10:34.326 "zone_management": false, 00:10:34.326 "zone_append": false, 00:10:34.326 "compare": false, 00:10:34.326 "compare_and_write": false, 00:10:34.326 "abort": true, 00:10:34.326 "seek_hole": false, 00:10:34.326 "seek_data": false, 00:10:34.326 "copy": true, 00:10:34.326 "nvme_iov_md": false 00:10:34.326 }, 00:10:34.326 "memory_domains": [ 00:10:34.326 { 00:10:34.326 "dma_device_id": "system", 00:10:34.326 "dma_device_type": 1 00:10:34.326 }, 00:10:34.326 { 00:10:34.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.326 "dma_device_type": 2 00:10:34.326 } 00:10:34.326 ], 00:10:34.326 "driver_specific": {} 00:10:34.326 } 00:10:34.326 ] 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.326 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.326 "name": "Existed_Raid", 00:10:34.326 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:34.326 "strip_size_kb": 64, 00:10:34.326 "state": "online", 00:10:34.326 "raid_level": "raid0", 00:10:34.326 "superblock": true, 00:10:34.326 "num_base_bdevs": 3, 00:10:34.326 "num_base_bdevs_discovered": 3, 00:10:34.326 "num_base_bdevs_operational": 3, 00:10:34.326 "base_bdevs_list": [ 00:10:34.326 { 00:10:34.326 "name": "NewBaseBdev", 00:10:34.326 "uuid": "0924a47b-0b74-4b78-9c6b-c2878d2d3eca", 00:10:34.326 "is_configured": true, 00:10:34.326 "data_offset": 2048, 00:10:34.326 "data_size": 63488 00:10:34.326 }, 00:10:34.326 { 00:10:34.326 "name": "BaseBdev2", 00:10:34.326 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:34.326 "is_configured": true, 00:10:34.326 "data_offset": 2048, 00:10:34.326 "data_size": 63488 00:10:34.326 }, 00:10:34.327 { 00:10:34.327 "name": "BaseBdev3", 00:10:34.327 "uuid": 
"e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:34.327 "is_configured": true, 00:10:34.327 "data_offset": 2048, 00:10:34.327 "data_size": 63488 00:10:34.327 } 00:10:34.327 ] 00:10:34.327 }' 00:10:34.327 04:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.327 04:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.895 [2024-11-27 04:27:31.304751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.895 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.895 "name": "Existed_Raid", 00:10:34.895 "aliases": [ 00:10:34.895 "ba6d75dd-a3c4-452d-a219-b576589920ce" 
00:10:34.895 ], 00:10:34.895 "product_name": "Raid Volume", 00:10:34.895 "block_size": 512, 00:10:34.895 "num_blocks": 190464, 00:10:34.895 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:34.895 "assigned_rate_limits": { 00:10:34.895 "rw_ios_per_sec": 0, 00:10:34.895 "rw_mbytes_per_sec": 0, 00:10:34.895 "r_mbytes_per_sec": 0, 00:10:34.895 "w_mbytes_per_sec": 0 00:10:34.895 }, 00:10:34.895 "claimed": false, 00:10:34.895 "zoned": false, 00:10:34.895 "supported_io_types": { 00:10:34.895 "read": true, 00:10:34.895 "write": true, 00:10:34.895 "unmap": true, 00:10:34.895 "flush": true, 00:10:34.895 "reset": true, 00:10:34.895 "nvme_admin": false, 00:10:34.895 "nvme_io": false, 00:10:34.895 "nvme_io_md": false, 00:10:34.895 "write_zeroes": true, 00:10:34.895 "zcopy": false, 00:10:34.895 "get_zone_info": false, 00:10:34.895 "zone_management": false, 00:10:34.895 "zone_append": false, 00:10:34.895 "compare": false, 00:10:34.895 "compare_and_write": false, 00:10:34.895 "abort": false, 00:10:34.895 "seek_hole": false, 00:10:34.895 "seek_data": false, 00:10:34.895 "copy": false, 00:10:34.895 "nvme_iov_md": false 00:10:34.895 }, 00:10:34.895 "memory_domains": [ 00:10:34.895 { 00:10:34.895 "dma_device_id": "system", 00:10:34.895 "dma_device_type": 1 00:10:34.895 }, 00:10:34.895 { 00:10:34.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.895 "dma_device_type": 2 00:10:34.895 }, 00:10:34.895 { 00:10:34.896 "dma_device_id": "system", 00:10:34.896 "dma_device_type": 1 00:10:34.896 }, 00:10:34.896 { 00:10:34.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.896 "dma_device_type": 2 00:10:34.896 }, 00:10:34.896 { 00:10:34.896 "dma_device_id": "system", 00:10:34.896 "dma_device_type": 1 00:10:34.896 }, 00:10:34.896 { 00:10:34.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.896 "dma_device_type": 2 00:10:34.896 } 00:10:34.896 ], 00:10:34.896 "driver_specific": { 00:10:34.896 "raid": { 00:10:34.896 "uuid": "ba6d75dd-a3c4-452d-a219-b576589920ce", 00:10:34.896 
"strip_size_kb": 64, 00:10:34.896 "state": "online", 00:10:34.896 "raid_level": "raid0", 00:10:34.896 "superblock": true, 00:10:34.896 "num_base_bdevs": 3, 00:10:34.896 "num_base_bdevs_discovered": 3, 00:10:34.896 "num_base_bdevs_operational": 3, 00:10:34.896 "base_bdevs_list": [ 00:10:34.896 { 00:10:34.896 "name": "NewBaseBdev", 00:10:34.896 "uuid": "0924a47b-0b74-4b78-9c6b-c2878d2d3eca", 00:10:34.896 "is_configured": true, 00:10:34.896 "data_offset": 2048, 00:10:34.896 "data_size": 63488 00:10:34.896 }, 00:10:34.896 { 00:10:34.896 "name": "BaseBdev2", 00:10:34.896 "uuid": "0964879c-dbf2-4091-b0fc-06348a1fcadb", 00:10:34.896 "is_configured": true, 00:10:34.896 "data_offset": 2048, 00:10:34.896 "data_size": 63488 00:10:34.896 }, 00:10:34.896 { 00:10:34.896 "name": "BaseBdev3", 00:10:34.896 "uuid": "e32cad93-b3c9-4f17-a803-341208c7a5b2", 00:10:34.896 "is_configured": true, 00:10:34.896 "data_offset": 2048, 00:10:34.896 "data_size": 63488 00:10:34.896 } 00:10:34.896 ] 00:10:34.896 } 00:10:34.896 } 00:10:34.896 }' 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:34.896 BaseBdev2 00:10:34.896 BaseBdev3' 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.896 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.155 04:27:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.155 [2024-11-27 04:27:31.583968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.155 [2024-11-27 04:27:31.584003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.155 [2024-11-27 04:27:31.584184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.155 [2024-11-27 04:27:31.584269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.155 [2024-11-27 04:27:31.584328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64636 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64636 ']' 00:10:35.155 04:27:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64636 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64636 00:10:35.155 killing process with pid 64636 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64636' 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64636 00:10:35.155 [2024-11-27 04:27:31.633811] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.155 04:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64636 00:10:35.413 [2024-11-27 04:27:31.989469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.790 ************************************ 00:10:36.790 END TEST raid_state_function_test_sb 00:10:36.790 ************************************ 00:10:36.790 04:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:36.790 00:10:36.790 real 0m11.356s 00:10:36.790 user 0m17.961s 00:10:36.790 sys 0m1.843s 00:10:36.790 04:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.790 04:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.790 04:27:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:36.790 04:27:33 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.790 04:27:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.790 04:27:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.790 ************************************ 00:10:36.790 START TEST raid_superblock_test 00:10:36.790 ************************************ 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:36.790 04:27:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:36.790 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65266 00:10:37.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.049 04:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65266 00:10:37.049 04:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65266 ']' 00:10:37.049 04:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.049 04:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.049 04:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.049 04:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.049 04:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.049 [2024-11-27 04:27:33.472765] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:37.049 [2024-11-27 04:27:33.472899] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65266 ] 00:10:37.309 [2024-11-27 04:27:33.651452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.309 [2024-11-27 04:27:33.782380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.569 [2024-11-27 04:27:34.021225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.569 [2024-11-27 04:27:34.021302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:37.830 
04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.830 malloc1 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.830 [2024-11-27 04:27:34.401885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.830 [2024-11-27 04:27:34.402046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.830 [2024-11-27 04:27:34.402076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:37.830 [2024-11-27 04:27:34.402117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.830 [2024-11-27 04:27:34.404665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.830 [2024-11-27 04:27:34.404711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.830 pt1 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.830 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.103 malloc2 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.103 [2024-11-27 04:27:34.465712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.103 [2024-11-27 04:27:34.465796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.103 [2024-11-27 04:27:34.465828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:38.103 [2024-11-27 04:27:34.465838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.103 [2024-11-27 04:27:34.468402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.103 [2024-11-27 04:27:34.468507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.103 
pt2 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.103 malloc3 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.103 [2024-11-27 04:27:34.540968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:38.103 [2024-11-27 04:27:34.541054] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.103 [2024-11-27 04:27:34.541081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:38.103 [2024-11-27 04:27:34.541110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.103 [2024-11-27 04:27:34.543640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.103 [2024-11-27 04:27:34.543782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:38.103 pt3 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.103 [2024-11-27 04:27:34.553005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.103 [2024-11-27 04:27:34.555148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.103 [2024-11-27 04:27:34.555230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:38.103 [2024-11-27 04:27:34.555413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:38.103 [2024-11-27 04:27:34.555428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:38.103 [2024-11-27 04:27:34.555751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:38.103 [2024-11-27 04:27:34.555940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:38.103 [2024-11-27 04:27:34.555951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:38.103 [2024-11-27 04:27:34.556156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.103 "name": "raid_bdev1", 00:10:38.103 "uuid": "74b04d72-0c4f-4b91-88de-a73e97103b9b", 00:10:38.103 "strip_size_kb": 64, 00:10:38.103 "state": "online", 00:10:38.103 "raid_level": "raid0", 00:10:38.103 "superblock": true, 00:10:38.103 "num_base_bdevs": 3, 00:10:38.103 "num_base_bdevs_discovered": 3, 00:10:38.103 "num_base_bdevs_operational": 3, 00:10:38.103 "base_bdevs_list": [ 00:10:38.103 { 00:10:38.103 "name": "pt1", 00:10:38.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.103 "is_configured": true, 00:10:38.103 "data_offset": 2048, 00:10:38.103 "data_size": 63488 00:10:38.103 }, 00:10:38.103 { 00:10:38.103 "name": "pt2", 00:10:38.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.103 "is_configured": true, 00:10:38.103 "data_offset": 2048, 00:10:38.103 "data_size": 63488 00:10:38.103 }, 00:10:38.103 { 00:10:38.103 "name": "pt3", 00:10:38.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.103 "is_configured": true, 00:10:38.103 "data_offset": 2048, 00:10:38.103 "data_size": 63488 00:10:38.103 } 00:10:38.103 ] 00:10:38.103 }' 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.103 04:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.687 [2024-11-27 04:27:35.044493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.687 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.687 "name": "raid_bdev1", 00:10:38.687 "aliases": [ 00:10:38.687 "74b04d72-0c4f-4b91-88de-a73e97103b9b" 00:10:38.687 ], 00:10:38.687 "product_name": "Raid Volume", 00:10:38.687 "block_size": 512, 00:10:38.687 "num_blocks": 190464, 00:10:38.687 "uuid": "74b04d72-0c4f-4b91-88de-a73e97103b9b", 00:10:38.687 "assigned_rate_limits": { 00:10:38.687 "rw_ios_per_sec": 0, 00:10:38.687 "rw_mbytes_per_sec": 0, 00:10:38.687 "r_mbytes_per_sec": 0, 00:10:38.687 "w_mbytes_per_sec": 0 00:10:38.687 }, 00:10:38.687 "claimed": false, 00:10:38.687 "zoned": false, 00:10:38.687 "supported_io_types": { 00:10:38.687 "read": true, 00:10:38.687 "write": true, 00:10:38.687 "unmap": true, 00:10:38.687 "flush": true, 00:10:38.687 "reset": true, 00:10:38.687 "nvme_admin": false, 00:10:38.687 "nvme_io": false, 00:10:38.687 "nvme_io_md": false, 00:10:38.687 "write_zeroes": true, 00:10:38.687 "zcopy": false, 00:10:38.687 "get_zone_info": false, 00:10:38.687 "zone_management": false, 00:10:38.687 "zone_append": false, 00:10:38.688 "compare": 
false, 00:10:38.688 "compare_and_write": false, 00:10:38.688 "abort": false, 00:10:38.688 "seek_hole": false, 00:10:38.688 "seek_data": false, 00:10:38.688 "copy": false, 00:10:38.688 "nvme_iov_md": false 00:10:38.688 }, 00:10:38.688 "memory_domains": [ 00:10:38.688 { 00:10:38.688 "dma_device_id": "system", 00:10:38.688 "dma_device_type": 1 00:10:38.688 }, 00:10:38.688 { 00:10:38.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.688 "dma_device_type": 2 00:10:38.688 }, 00:10:38.688 { 00:10:38.688 "dma_device_id": "system", 00:10:38.688 "dma_device_type": 1 00:10:38.688 }, 00:10:38.688 { 00:10:38.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.688 "dma_device_type": 2 00:10:38.688 }, 00:10:38.688 { 00:10:38.688 "dma_device_id": "system", 00:10:38.688 "dma_device_type": 1 00:10:38.688 }, 00:10:38.688 { 00:10:38.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.688 "dma_device_type": 2 00:10:38.688 } 00:10:38.688 ], 00:10:38.688 "driver_specific": { 00:10:38.688 "raid": { 00:10:38.688 "uuid": "74b04d72-0c4f-4b91-88de-a73e97103b9b", 00:10:38.688 "strip_size_kb": 64, 00:10:38.688 "state": "online", 00:10:38.688 "raid_level": "raid0", 00:10:38.688 "superblock": true, 00:10:38.688 "num_base_bdevs": 3, 00:10:38.688 "num_base_bdevs_discovered": 3, 00:10:38.688 "num_base_bdevs_operational": 3, 00:10:38.688 "base_bdevs_list": [ 00:10:38.688 { 00:10:38.688 "name": "pt1", 00:10:38.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.688 "is_configured": true, 00:10:38.688 "data_offset": 2048, 00:10:38.688 "data_size": 63488 00:10:38.688 }, 00:10:38.688 { 00:10:38.688 "name": "pt2", 00:10:38.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.688 "is_configured": true, 00:10:38.688 "data_offset": 2048, 00:10:38.688 "data_size": 63488 00:10:38.688 }, 00:10:38.688 { 00:10:38.688 "name": "pt3", 00:10:38.688 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.688 "is_configured": true, 00:10:38.688 "data_offset": 2048, 00:10:38.688 "data_size": 
63488 00:10:38.688 } 00:10:38.688 ] 00:10:38.688 } 00:10:38.688 } 00:10:38.688 }' 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:38.688 pt2 00:10:38.688 pt3' 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.688 
04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.688 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.948 [2024-11-27 04:27:35.348020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.948 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=74b04d72-0c4f-4b91-88de-a73e97103b9b 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 74b04d72-0c4f-4b91-88de-a73e97103b9b ']' 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.949 [2024-11-27 04:27:35.391608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.949 [2024-11-27 04:27:35.391647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.949 [2024-11-27 04:27:35.391752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.949 [2024-11-27 04:27:35.391841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.949 [2024-11-27 04:27:35.391852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:38.949 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 [2024-11-27 04:27:35.539448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:39.210 [2024-11-27 04:27:35.541644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:39.210 [2024-11-27 04:27:35.541773] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:39.210 [2024-11-27 04:27:35.541843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:39.210 [2024-11-27 04:27:35.541904] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:39.210 [2024-11-27 04:27:35.541928] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:39.210 [2024-11-27 04:27:35.541948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.210 [2024-11-27 04:27:35.541962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:39.210 request: 00:10:39.210 { 00:10:39.210 "name": "raid_bdev1", 00:10:39.210 "raid_level": "raid0", 00:10:39.210 "base_bdevs": [ 00:10:39.210 "malloc1", 00:10:39.210 "malloc2", 00:10:39.210 "malloc3" 00:10:39.210 ], 00:10:39.210 "strip_size_kb": 64, 00:10:39.210 "superblock": false, 00:10:39.210 "method": "bdev_raid_create", 00:10:39.210 "req_id": 1 00:10:39.210 } 00:10:39.210 Got JSON-RPC error response 00:10:39.210 response: 00:10:39.210 { 00:10:39.210 "code": -17, 00:10:39.210 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:39.210 } 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 [2024-11-27 04:27:35.607274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:39.210 [2024-11-27 04:27:35.607422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.210 [2024-11-27 04:27:35.607468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:39.210 [2024-11-27 04:27:35.607507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.210 [2024-11-27 04:27:35.610135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.210 [2024-11-27 04:27:35.610235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:39.210 [2024-11-27 04:27:35.610374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:39.210 [2024-11-27 04:27:35.610483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:39.210 pt1 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.210 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.211 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.211 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.211 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.211 "name": "raid_bdev1", 00:10:39.211 "uuid": "74b04d72-0c4f-4b91-88de-a73e97103b9b", 00:10:39.211 
"strip_size_kb": 64, 00:10:39.211 "state": "configuring", 00:10:39.211 "raid_level": "raid0", 00:10:39.211 "superblock": true, 00:10:39.211 "num_base_bdevs": 3, 00:10:39.211 "num_base_bdevs_discovered": 1, 00:10:39.211 "num_base_bdevs_operational": 3, 00:10:39.211 "base_bdevs_list": [ 00:10:39.211 { 00:10:39.211 "name": "pt1", 00:10:39.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.211 "is_configured": true, 00:10:39.211 "data_offset": 2048, 00:10:39.211 "data_size": 63488 00:10:39.211 }, 00:10:39.211 { 00:10:39.211 "name": null, 00:10:39.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.211 "is_configured": false, 00:10:39.211 "data_offset": 2048, 00:10:39.211 "data_size": 63488 00:10:39.211 }, 00:10:39.211 { 00:10:39.211 "name": null, 00:10:39.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.211 "is_configured": false, 00:10:39.211 "data_offset": 2048, 00:10:39.211 "data_size": 63488 00:10:39.211 } 00:10:39.211 ] 00:10:39.211 }' 00:10:39.211 04:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.211 04:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.471 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:39.471 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.471 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.471 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.471 [2024-11-27 04:27:36.046557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.471 [2024-11-27 04:27:36.046730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.471 [2024-11-27 04:27:36.046768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:39.471 [2024-11-27 04:27:36.046779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.471 [2024-11-27 04:27:36.047360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.471 [2024-11-27 04:27:36.047417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.471 [2024-11-27 04:27:36.047547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.471 [2024-11-27 04:27:36.047617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.471 pt2 00:10:39.471 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.471 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:39.471 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.471 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.471 [2024-11-27 04:27:36.054556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.731 04:27:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.731 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.731 "name": "raid_bdev1", 00:10:39.731 "uuid": "74b04d72-0c4f-4b91-88de-a73e97103b9b", 00:10:39.731 "strip_size_kb": 64, 00:10:39.731 "state": "configuring", 00:10:39.731 "raid_level": "raid0", 00:10:39.731 "superblock": true, 00:10:39.732 "num_base_bdevs": 3, 00:10:39.732 "num_base_bdevs_discovered": 1, 00:10:39.732 "num_base_bdevs_operational": 3, 00:10:39.732 "base_bdevs_list": [ 00:10:39.732 { 00:10:39.732 "name": "pt1", 00:10:39.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.732 "is_configured": true, 00:10:39.732 "data_offset": 2048, 00:10:39.732 "data_size": 63488 00:10:39.732 }, 00:10:39.732 { 00:10:39.732 "name": null, 00:10:39.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.732 "is_configured": false, 00:10:39.732 "data_offset": 0, 00:10:39.732 "data_size": 63488 00:10:39.732 }, 00:10:39.732 { 00:10:39.732 "name": null, 00:10:39.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.732 
"is_configured": false, 00:10:39.732 "data_offset": 2048, 00:10:39.732 "data_size": 63488 00:10:39.732 } 00:10:39.732 ] 00:10:39.732 }' 00:10:39.732 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.732 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.991 [2024-11-27 04:27:36.521719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.991 [2024-11-27 04:27:36.521890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.991 [2024-11-27 04:27:36.521946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:39.991 [2024-11-27 04:27:36.521991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.991 [2024-11-27 04:27:36.522564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.991 [2024-11-27 04:27:36.522637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.991 [2024-11-27 04:27:36.522761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.991 [2024-11-27 04:27:36.522822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.991 pt2 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.991 [2024-11-27 04:27:36.533709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:39.991 [2024-11-27 04:27:36.533861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.991 [2024-11-27 04:27:36.533901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:39.991 [2024-11-27 04:27:36.533943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.991 [2024-11-27 04:27:36.534501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.991 [2024-11-27 04:27:36.534577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:39.991 [2024-11-27 04:27:36.534695] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:39.991 [2024-11-27 04:27:36.534756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:39.991 [2024-11-27 04:27:36.534928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.991 [2024-11-27 04:27:36.534974] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:39.991 [2024-11-27 04:27:36.535311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:39.991 [2024-11-27 04:27:36.535530] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.991 [2024-11-27 04:27:36.535574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:39.991 [2024-11-27 04:27:36.535803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.991 pt3 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.991 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.251 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.251 "name": "raid_bdev1", 00:10:40.251 "uuid": "74b04d72-0c4f-4b91-88de-a73e97103b9b", 00:10:40.251 "strip_size_kb": 64, 00:10:40.251 "state": "online", 00:10:40.251 "raid_level": "raid0", 00:10:40.251 "superblock": true, 00:10:40.251 "num_base_bdevs": 3, 00:10:40.251 "num_base_bdevs_discovered": 3, 00:10:40.251 "num_base_bdevs_operational": 3, 00:10:40.251 "base_bdevs_list": [ 00:10:40.251 { 00:10:40.251 "name": "pt1", 00:10:40.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:40.251 "is_configured": true, 00:10:40.251 "data_offset": 2048, 00:10:40.251 "data_size": 63488 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "name": "pt2", 00:10:40.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.251 "is_configured": true, 00:10:40.251 "data_offset": 2048, 00:10:40.251 "data_size": 63488 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "name": "pt3", 00:10:40.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.251 "is_configured": true, 00:10:40.251 "data_offset": 2048, 00:10:40.251 "data_size": 63488 00:10:40.251 } 00:10:40.251 ] 00:10:40.251 }' 00:10:40.251 04:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.251 04:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:40.511 04:27:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.511 [2024-11-27 04:27:37.025300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.511 "name": "raid_bdev1", 00:10:40.511 "aliases": [ 00:10:40.511 "74b04d72-0c4f-4b91-88de-a73e97103b9b" 00:10:40.511 ], 00:10:40.511 "product_name": "Raid Volume", 00:10:40.511 "block_size": 512, 00:10:40.511 "num_blocks": 190464, 00:10:40.511 "uuid": "74b04d72-0c4f-4b91-88de-a73e97103b9b", 00:10:40.511 "assigned_rate_limits": { 00:10:40.511 "rw_ios_per_sec": 0, 00:10:40.511 "rw_mbytes_per_sec": 0, 00:10:40.511 "r_mbytes_per_sec": 0, 00:10:40.511 "w_mbytes_per_sec": 0 00:10:40.511 }, 00:10:40.511 "claimed": false, 00:10:40.511 "zoned": false, 00:10:40.511 "supported_io_types": { 00:10:40.511 "read": true, 00:10:40.511 "write": true, 00:10:40.511 "unmap": true, 00:10:40.511 "flush": true, 00:10:40.511 "reset": true, 00:10:40.511 "nvme_admin": false, 00:10:40.511 "nvme_io": false, 00:10:40.511 "nvme_io_md": false, 00:10:40.511 
"write_zeroes": true, 00:10:40.511 "zcopy": false, 00:10:40.511 "get_zone_info": false, 00:10:40.511 "zone_management": false, 00:10:40.511 "zone_append": false, 00:10:40.511 "compare": false, 00:10:40.511 "compare_and_write": false, 00:10:40.511 "abort": false, 00:10:40.511 "seek_hole": false, 00:10:40.511 "seek_data": false, 00:10:40.511 "copy": false, 00:10:40.511 "nvme_iov_md": false 00:10:40.511 }, 00:10:40.511 "memory_domains": [ 00:10:40.511 { 00:10:40.511 "dma_device_id": "system", 00:10:40.511 "dma_device_type": 1 00:10:40.511 }, 00:10:40.511 { 00:10:40.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.511 "dma_device_type": 2 00:10:40.511 }, 00:10:40.511 { 00:10:40.511 "dma_device_id": "system", 00:10:40.511 "dma_device_type": 1 00:10:40.511 }, 00:10:40.511 { 00:10:40.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.511 "dma_device_type": 2 00:10:40.511 }, 00:10:40.511 { 00:10:40.511 "dma_device_id": "system", 00:10:40.511 "dma_device_type": 1 00:10:40.511 }, 00:10:40.511 { 00:10:40.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.511 "dma_device_type": 2 00:10:40.511 } 00:10:40.511 ], 00:10:40.511 "driver_specific": { 00:10:40.511 "raid": { 00:10:40.511 "uuid": "74b04d72-0c4f-4b91-88de-a73e97103b9b", 00:10:40.511 "strip_size_kb": 64, 00:10:40.511 "state": "online", 00:10:40.511 "raid_level": "raid0", 00:10:40.511 "superblock": true, 00:10:40.511 "num_base_bdevs": 3, 00:10:40.511 "num_base_bdevs_discovered": 3, 00:10:40.511 "num_base_bdevs_operational": 3, 00:10:40.511 "base_bdevs_list": [ 00:10:40.511 { 00:10:40.511 "name": "pt1", 00:10:40.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:40.511 "is_configured": true, 00:10:40.511 "data_offset": 2048, 00:10:40.511 "data_size": 63488 00:10:40.511 }, 00:10:40.511 { 00:10:40.511 "name": "pt2", 00:10:40.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.511 "is_configured": true, 00:10:40.511 "data_offset": 2048, 00:10:40.511 "data_size": 63488 00:10:40.511 }, 00:10:40.511 
{ 00:10:40.511 "name": "pt3", 00:10:40.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.511 "is_configured": true, 00:10:40.511 "data_offset": 2048, 00:10:40.511 "data_size": 63488 00:10:40.511 } 00:10:40.511 ] 00:10:40.511 } 00:10:40.511 } 00:10:40.511 }' 00:10:40.511 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:40.772 pt2 00:10:40.772 pt3' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 [2024-11-27 
04:27:37.316764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 74b04d72-0c4f-4b91-88de-a73e97103b9b '!=' 74b04d72-0c4f-4b91-88de-a73e97103b9b ']' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65266 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65266 ']' 00:10:40.772 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65266 00:10:41.032 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:41.032 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.032 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65266 00:10:41.032 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.032 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.032 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65266' 00:10:41.032 killing process with pid 65266 00:10:41.032 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65266 00:10:41.032 [2024-11-27 04:27:37.390784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.032 [2024-11-27 04:27:37.390920] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.032 04:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65266 00:10:41.032 [2024-11-27 04:27:37.390991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.032 [2024-11-27 04:27:37.391005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:41.292 [2024-11-27 04:27:37.738627] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.673 04:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:42.673 00:10:42.673 real 0m5.647s 00:10:42.673 user 0m8.042s 00:10:42.673 sys 0m0.948s 00:10:42.673 04:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.673 04:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.673 ************************************ 00:10:42.673 END TEST raid_superblock_test 00:10:42.673 ************************************ 00:10:42.673 04:27:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:42.673 04:27:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:42.673 04:27:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.673 04:27:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.673 ************************************ 00:10:42.673 START TEST raid_read_error_test 00:10:42.673 ************************************ 00:10:42.673 04:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:10:42.673 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:42.673 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:42.673 04:27:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:42.673 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:42.673 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.673 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:42.673 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3N7dcdKLDF 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65526 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65526 00:10:42.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65526 ']' 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.674 04:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.674 [2024-11-27 04:27:39.202355] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:42.674 [2024-11-27 04:27:39.202509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65526 ] 00:10:42.935 [2024-11-27 04:27:39.382726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.935 [2024-11-27 04:27:39.518233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.194 [2024-11-27 04:27:39.753649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.194 [2024-11-27 04:27:39.753726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 BaseBdev1_malloc 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 true 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 [2024-11-27 04:27:40.171199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:43.764 [2024-11-27 04:27:40.171290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.764 [2024-11-27 04:27:40.171318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:43.764 [2024-11-27 04:27:40.171331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.764 [2024-11-27 04:27:40.173889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.764 [2024-11-27 04:27:40.173950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.764 BaseBdev1 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 BaseBdev2_malloc 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 true 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 [2024-11-27 04:27:40.244949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:43.764 [2024-11-27 04:27:40.245036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.764 [2024-11-27 04:27:40.245061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:43.764 [2024-11-27 04:27:40.245074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.764 [2024-11-27 04:27:40.247680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.764 [2024-11-27 04:27:40.247874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:43.764 BaseBdev2 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 BaseBdev3_malloc 00:10:43.764 04:27:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 true 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 [2024-11-27 04:27:40.331801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:43.764 [2024-11-27 04:27:40.331884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.764 [2024-11-27 04:27:40.331910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:43.764 [2024-11-27 04:27:40.331923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.764 [2024-11-27 04:27:40.334538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.764 [2024-11-27 04:27:40.334655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:43.764 BaseBdev3 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.764 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.764 [2024-11-27 04:27:40.343894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.764 [2024-11-27 04:27:40.346056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.764 [2024-11-27 04:27:40.346245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.764 [2024-11-27 04:27:40.346507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:43.764 [2024-11-27 04:27:40.346526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:43.764 [2024-11-27 04:27:40.346862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:43.764 [2024-11-27 04:27:40.347056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:43.764 [2024-11-27 04:27:40.347072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:43.764 [2024-11-27 04:27:40.347298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.024 04:27:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.024 "name": "raid_bdev1", 00:10:44.024 "uuid": "690b76ce-a7e9-4f87-af35-bb9462cf275c", 00:10:44.024 "strip_size_kb": 64, 00:10:44.024 "state": "online", 00:10:44.024 "raid_level": "raid0", 00:10:44.024 "superblock": true, 00:10:44.024 "num_base_bdevs": 3, 00:10:44.024 "num_base_bdevs_discovered": 3, 00:10:44.024 "num_base_bdevs_operational": 3, 00:10:44.024 "base_bdevs_list": [ 00:10:44.024 { 00:10:44.024 "name": "BaseBdev1", 00:10:44.024 "uuid": "e053d1b9-1475-58b1-bbf7-2c84cf6f007d", 00:10:44.024 "is_configured": true, 00:10:44.024 "data_offset": 2048, 00:10:44.024 "data_size": 63488 00:10:44.024 }, 00:10:44.024 { 00:10:44.024 "name": "BaseBdev2", 00:10:44.024 "uuid": "f76fe28c-0660-52bb-b110-d9628a6e601c", 00:10:44.024 "is_configured": true, 00:10:44.024 "data_offset": 2048, 00:10:44.024 "data_size": 63488 
00:10:44.024 }, 00:10:44.024 { 00:10:44.024 "name": "BaseBdev3", 00:10:44.024 "uuid": "c0bcb763-be5b-5a4b-82bf-d2c9e4e25b8c", 00:10:44.024 "is_configured": true, 00:10:44.024 "data_offset": 2048, 00:10:44.024 "data_size": 63488 00:10:44.024 } 00:10:44.024 ] 00:10:44.024 }' 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.024 04:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.284 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:44.284 04:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:44.545 [2024-11-27 04:27:40.904438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.519 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.520 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.520 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.520 04:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.520 04:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.520 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.520 04:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.520 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.520 "name": "raid_bdev1", 00:10:45.520 "uuid": "690b76ce-a7e9-4f87-af35-bb9462cf275c", 00:10:45.520 "strip_size_kb": 64, 00:10:45.520 "state": "online", 00:10:45.520 "raid_level": "raid0", 00:10:45.520 "superblock": true, 00:10:45.520 "num_base_bdevs": 3, 00:10:45.520 "num_base_bdevs_discovered": 3, 00:10:45.520 "num_base_bdevs_operational": 3, 00:10:45.520 "base_bdevs_list": [ 00:10:45.520 { 00:10:45.520 "name": "BaseBdev1", 00:10:45.520 "uuid": "e053d1b9-1475-58b1-bbf7-2c84cf6f007d", 00:10:45.520 "is_configured": true, 00:10:45.520 "data_offset": 2048, 00:10:45.520 "data_size": 63488 
00:10:45.520 }, 00:10:45.520 { 00:10:45.520 "name": "BaseBdev2", 00:10:45.520 "uuid": "f76fe28c-0660-52bb-b110-d9628a6e601c", 00:10:45.520 "is_configured": true, 00:10:45.520 "data_offset": 2048, 00:10:45.520 "data_size": 63488 00:10:45.520 }, 00:10:45.520 { 00:10:45.520 "name": "BaseBdev3", 00:10:45.520 "uuid": "c0bcb763-be5b-5a4b-82bf-d2c9e4e25b8c", 00:10:45.520 "is_configured": true, 00:10:45.520 "data_offset": 2048, 00:10:45.520 "data_size": 63488 00:10:45.520 } 00:10:45.520 ] 00:10:45.520 }' 00:10:45.520 04:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.520 04:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.780 [2024-11-27 04:27:42.269600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.780 [2024-11-27 04:27:42.269731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.780 [2024-11-27 04:27:42.273162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.780 [2024-11-27 04:27:42.273286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.780 [2024-11-27 04:27:42.273353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.780 [2024-11-27 04:27:42.273406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:45.780 { 00:10:45.780 "results": [ 00:10:45.780 { 00:10:45.780 "job": "raid_bdev1", 00:10:45.780 "core_mask": "0x1", 00:10:45.780 "workload": "randrw", 00:10:45.780 "percentage": 50, 
00:10:45.780 "status": "finished", 00:10:45.780 "queue_depth": 1, 00:10:45.780 "io_size": 131072, 00:10:45.780 "runtime": 1.365867, 00:10:45.780 "iops": 13095.711368676453, 00:10:45.780 "mibps": 1636.9639210845567, 00:10:45.780 "io_failed": 1, 00:10:45.780 "io_timeout": 0, 00:10:45.780 "avg_latency_us": 105.63731241846403, 00:10:45.780 "min_latency_us": 27.388646288209607, 00:10:45.780 "max_latency_us": 1752.8733624454148 00:10:45.780 } 00:10:45.780 ], 00:10:45.780 "core_count": 1 00:10:45.780 } 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65526 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65526 ']' 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65526 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65526 00:10:45.780 killing process with pid 65526 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65526' 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65526 00:10:45.780 [2024-11-27 04:27:42.309928] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.780 04:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65526 00:10:46.040 [2024-11-27 
04:27:42.579293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3N7dcdKLDF 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:47.420 ************************************ 00:10:47.420 END TEST raid_read_error_test 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:47.420 00:10:47.420 real 0m4.847s 00:10:47.420 user 0m5.767s 00:10:47.420 sys 0m0.576s 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.420 04:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.420 ************************************ 00:10:47.420 04:27:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:47.420 04:27:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.420 04:27:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.420 04:27:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.421 ************************************ 00:10:47.421 START TEST raid_write_error_test 00:10:47.421 ************************************ 00:10:47.421 04:27:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:10:47.421 04:27:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:47.421 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:47.421 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:47.680 04:27:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:47.680 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TldfUWm8y8 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65672 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65672 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65672 ']' 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.681 04:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.681 [2024-11-27 04:27:44.116380] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:47.681 [2024-11-27 04:27:44.116515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65672 ] 00:10:47.941 [2024-11-27 04:27:44.281315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.941 [2024-11-27 04:27:44.410871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.200 [2024-11-27 04:27:44.623504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.200 [2024-11-27 04:27:44.623559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.770 BaseBdev1_malloc 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.770 true 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.770 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.770 [2024-11-27 04:27:45.107496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:48.770 [2024-11-27 04:27:45.107582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.770 [2024-11-27 04:27:45.107610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:48.770 [2024-11-27 04:27:45.107623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.770 [2024-11-27 04:27:45.110040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.771 [2024-11-27 04:27:45.110178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:48.771 BaseBdev1 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:48.771 BaseBdev2_malloc 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.771 true 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.771 [2024-11-27 04:27:45.180840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:48.771 [2024-11-27 04:27:45.181024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.771 [2024-11-27 04:27:45.181055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:48.771 [2024-11-27 04:27:45.181070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.771 [2024-11-27 04:27:45.183780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.771 [2024-11-27 04:27:45.183837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:48.771 BaseBdev2 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.771 04:27:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.771 BaseBdev3_malloc 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.771 true 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.771 [2024-11-27 04:27:45.264511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:48.771 [2024-11-27 04:27:45.264600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.771 [2024-11-27 04:27:45.264626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:48.771 [2024-11-27 04:27:45.264639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.771 [2024-11-27 04:27:45.267270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.771 [2024-11-27 04:27:45.267324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:48.771 BaseBdev3 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.771 [2024-11-27 04:27:45.276600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.771 [2024-11-27 04:27:45.278992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.771 [2024-11-27 04:27:45.279152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.771 [2024-11-27 04:27:45.279472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:48.771 [2024-11-27 04:27:45.279507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:48.771 [2024-11-27 04:27:45.279961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:48.771 [2024-11-27 04:27:45.280375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:48.771 [2024-11-27 04:27:45.280462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:48.771 [2024-11-27 04:27:45.280908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.771 "name": "raid_bdev1", 00:10:48.771 "uuid": "adc25f13-6165-4bb6-a915-b83bb451ec25", 00:10:48.771 "strip_size_kb": 64, 00:10:48.771 "state": "online", 00:10:48.771 "raid_level": "raid0", 00:10:48.771 "superblock": true, 00:10:48.771 "num_base_bdevs": 3, 00:10:48.771 "num_base_bdevs_discovered": 3, 00:10:48.771 "num_base_bdevs_operational": 3, 00:10:48.771 "base_bdevs_list": [ 00:10:48.771 { 00:10:48.771 "name": "BaseBdev1", 
00:10:48.771 "uuid": "7af2dbc5-03bc-5901-a388-892f68ae3938", 00:10:48.771 "is_configured": true, 00:10:48.771 "data_offset": 2048, 00:10:48.771 "data_size": 63488 00:10:48.771 }, 00:10:48.771 { 00:10:48.771 "name": "BaseBdev2", 00:10:48.771 "uuid": "a7e5ca1e-5ddd-57a4-a92d-80f20aa08fd3", 00:10:48.771 "is_configured": true, 00:10:48.771 "data_offset": 2048, 00:10:48.771 "data_size": 63488 00:10:48.771 }, 00:10:48.771 { 00:10:48.771 "name": "BaseBdev3", 00:10:48.771 "uuid": "2bbdc0fb-9376-5dfe-9788-934fe55f493d", 00:10:48.771 "is_configured": true, 00:10:48.771 "data_offset": 2048, 00:10:48.771 "data_size": 63488 00:10:48.771 } 00:10:48.771 ] 00:10:48.771 }' 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.771 04:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.340 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:49.340 04:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:49.340 [2024-11-27 04:27:45.857412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.279 "name": "raid_bdev1", 00:10:50.279 "uuid": "adc25f13-6165-4bb6-a915-b83bb451ec25", 00:10:50.279 "strip_size_kb": 64, 00:10:50.279 "state": "online", 00:10:50.279 
"raid_level": "raid0", 00:10:50.279 "superblock": true, 00:10:50.279 "num_base_bdevs": 3, 00:10:50.279 "num_base_bdevs_discovered": 3, 00:10:50.279 "num_base_bdevs_operational": 3, 00:10:50.279 "base_bdevs_list": [ 00:10:50.279 { 00:10:50.279 "name": "BaseBdev1", 00:10:50.279 "uuid": "7af2dbc5-03bc-5901-a388-892f68ae3938", 00:10:50.279 "is_configured": true, 00:10:50.279 "data_offset": 2048, 00:10:50.279 "data_size": 63488 00:10:50.279 }, 00:10:50.279 { 00:10:50.279 "name": "BaseBdev2", 00:10:50.279 "uuid": "a7e5ca1e-5ddd-57a4-a92d-80f20aa08fd3", 00:10:50.279 "is_configured": true, 00:10:50.279 "data_offset": 2048, 00:10:50.279 "data_size": 63488 00:10:50.279 }, 00:10:50.279 { 00:10:50.279 "name": "BaseBdev3", 00:10:50.279 "uuid": "2bbdc0fb-9376-5dfe-9788-934fe55f493d", 00:10:50.279 "is_configured": true, 00:10:50.279 "data_offset": 2048, 00:10:50.279 "data_size": 63488 00:10:50.279 } 00:10:50.279 ] 00:10:50.279 }' 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.279 04:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.849 [2024-11-27 04:27:47.206554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.849 [2024-11-27 04:27:47.206672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.849 [2024-11-27 04:27:47.210086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.849 [2024-11-27 04:27:47.210228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.849 [2024-11-27 04:27:47.210299] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.849 [2024-11-27 04:27:47.210360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65672 00:10:50.849 { 00:10:50.849 "results": [ 00:10:50.849 { 00:10:50.849 "job": "raid_bdev1", 00:10:50.849 "core_mask": "0x1", 00:10:50.849 "workload": "randrw", 00:10:50.849 "percentage": 50, 00:10:50.849 "status": "finished", 00:10:50.849 "queue_depth": 1, 00:10:50.849 "io_size": 131072, 00:10:50.849 "runtime": 1.349934, 00:10:50.849 "iops": 13135.456992712236, 00:10:50.849 "mibps": 1641.9321240890295, 00:10:50.849 "io_failed": 1, 00:10:50.849 "io_timeout": 0, 00:10:50.849 "avg_latency_us": 105.34804057370155, 00:10:50.849 "min_latency_us": 26.717903930131005, 00:10:50.849 "max_latency_us": 1752.8733624454148 00:10:50.849 } 00:10:50.849 ], 00:10:50.849 "core_count": 1 00:10:50.849 } 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65672 ']' 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65672 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65672 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65672' 00:10:50.849 killing process with pid 65672 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65672 00:10:50.849 04:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65672 00:10:50.849 [2024-11-27 04:27:47.249112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.109 [2024-11-27 04:27:47.516102] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TldfUWm8y8 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:52.585 ************************************ 00:10:52.585 END TEST raid_write_error_test 00:10:52.585 ************************************ 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:52.585 00:10:52.585 real 0m4.889s 00:10:52.585 user 0m5.833s 00:10:52.585 sys 0m0.567s 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.585 04:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.585 04:27:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:52.585 04:27:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:52.585 04:27:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:52.585 04:27:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.585 04:27:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.585 ************************************ 00:10:52.585 START TEST raid_state_function_test 00:10:52.585 ************************************ 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:52.585 04:27:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:52.585 Process raid pid: 65816 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65816 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65816' 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65816 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65816 ']' 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.585 04:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.585 [2024-11-27 04:27:49.046587] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:52.585 [2024-11-27 04:27:49.046722] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.843 [2024-11-27 04:27:49.211549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.843 [2024-11-27 04:27:49.349639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.101 [2024-11-27 04:27:49.587321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.101 [2024-11-27 04:27:49.587372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.360 04:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.360 04:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.360 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 
64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:53.360 04:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.360 04:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.360 [2024-11-27 04:27:49.942887] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.360 [2024-11-27 04:27:49.943037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.360 [2024-11-27 04:27:49.943077] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.360 [2024-11-27 04:27:49.943117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.360 [2024-11-27 04:27:49.943142] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.360 [2024-11-27 04:27:49.943168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.619 04:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.620 "name": "Existed_Raid", 00:10:53.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.620 "strip_size_kb": 64, 00:10:53.620 "state": "configuring", 00:10:53.620 "raid_level": "concat", 00:10:53.620 "superblock": false, 00:10:53.620 "num_base_bdevs": 3, 00:10:53.620 "num_base_bdevs_discovered": 0, 00:10:53.620 "num_base_bdevs_operational": 3, 00:10:53.620 "base_bdevs_list": [ 00:10:53.620 { 00:10:53.620 "name": "BaseBdev1", 00:10:53.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.620 "is_configured": false, 00:10:53.620 "data_offset": 0, 00:10:53.620 "data_size": 0 00:10:53.620 }, 00:10:53.620 { 00:10:53.620 "name": "BaseBdev2", 00:10:53.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.620 "is_configured": false, 00:10:53.620 "data_offset": 0, 00:10:53.620 "data_size": 0 00:10:53.620 }, 00:10:53.620 { 00:10:53.620 "name": "BaseBdev3", 00:10:53.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.620 "is_configured": 
false, 00:10:53.620 "data_offset": 0, 00:10:53.620 "data_size": 0 00:10:53.620 } 00:10:53.620 ] 00:10:53.620 }' 00:10:53.620 04:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.620 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.879 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.880 [2024-11-27 04:27:50.398090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.880 [2024-11-27 04:27:50.398150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.880 [2024-11-27 04:27:50.406125] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.880 [2024-11-27 04:27:50.406190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.880 [2024-11-27 04:27:50.406200] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.880 [2024-11-27 04:27:50.406227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.880 [2024-11-27 04:27:50.406235] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.880 [2024-11-27 04:27:50.406246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.880 [2024-11-27 04:27:50.456414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.880 BaseBdev1 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.880 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.138 04:27:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.138 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.138 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.138 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.138 [ 00:10:54.138 { 00:10:54.138 "name": "BaseBdev1", 00:10:54.139 "aliases": [ 00:10:54.139 "a8361dd6-7c91-492a-8439-14df4ba4045a" 00:10:54.139 ], 00:10:54.139 "product_name": "Malloc disk", 00:10:54.139 "block_size": 512, 00:10:54.139 "num_blocks": 65536, 00:10:54.139 "uuid": "a8361dd6-7c91-492a-8439-14df4ba4045a", 00:10:54.139 "assigned_rate_limits": { 00:10:54.139 "rw_ios_per_sec": 0, 00:10:54.139 "rw_mbytes_per_sec": 0, 00:10:54.139 "r_mbytes_per_sec": 0, 00:10:54.139 "w_mbytes_per_sec": 0 00:10:54.139 }, 00:10:54.139 "claimed": true, 00:10:54.139 "claim_type": "exclusive_write", 00:10:54.139 "zoned": false, 00:10:54.139 "supported_io_types": { 00:10:54.139 "read": true, 00:10:54.139 "write": true, 00:10:54.139 "unmap": true, 00:10:54.139 "flush": true, 00:10:54.139 "reset": true, 00:10:54.139 "nvme_admin": false, 00:10:54.139 "nvme_io": false, 00:10:54.139 "nvme_io_md": false, 00:10:54.139 "write_zeroes": true, 00:10:54.139 "zcopy": true, 00:10:54.139 "get_zone_info": false, 00:10:54.139 "zone_management": false, 00:10:54.139 "zone_append": false, 00:10:54.139 "compare": false, 00:10:54.139 "compare_and_write": false, 00:10:54.139 "abort": true, 00:10:54.139 "seek_hole": false, 00:10:54.139 "seek_data": false, 00:10:54.139 "copy": true, 00:10:54.139 "nvme_iov_md": false 00:10:54.139 }, 00:10:54.139 "memory_domains": [ 00:10:54.139 { 00:10:54.139 "dma_device_id": "system", 00:10:54.139 "dma_device_type": 1 00:10:54.139 }, 00:10:54.139 { 00:10:54.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.139 "dma_device_type": 2 00:10:54.139 } 00:10:54.139 ], 
00:10:54.139 "driver_specific": {} 00:10:54.139 } 00:10:54.139 ] 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.139 "name": "Existed_Raid", 00:10:54.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.139 "strip_size_kb": 64, 00:10:54.139 "state": "configuring", 00:10:54.139 "raid_level": "concat", 00:10:54.139 "superblock": false, 00:10:54.139 "num_base_bdevs": 3, 00:10:54.139 "num_base_bdevs_discovered": 1, 00:10:54.139 "num_base_bdevs_operational": 3, 00:10:54.139 "base_bdevs_list": [ 00:10:54.139 { 00:10:54.139 "name": "BaseBdev1", 00:10:54.139 "uuid": "a8361dd6-7c91-492a-8439-14df4ba4045a", 00:10:54.139 "is_configured": true, 00:10:54.139 "data_offset": 0, 00:10:54.139 "data_size": 65536 00:10:54.139 }, 00:10:54.139 { 00:10:54.139 "name": "BaseBdev2", 00:10:54.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.139 "is_configured": false, 00:10:54.139 "data_offset": 0, 00:10:54.139 "data_size": 0 00:10:54.139 }, 00:10:54.139 { 00:10:54.139 "name": "BaseBdev3", 00:10:54.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.139 "is_configured": false, 00:10:54.139 "data_offset": 0, 00:10:54.139 "data_size": 0 00:10:54.139 } 00:10:54.139 ] 00:10:54.139 }' 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.139 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.398 [2024-11-27 04:27:50.935782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.398 [2024-11-27 04:27:50.935929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
Existed_Raid, state configuring 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.398 [2024-11-27 04:27:50.943909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.398 [2024-11-27 04:27:50.946050] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.398 [2024-11-27 04:27:50.946174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.398 [2024-11-27 04:27:50.946227] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.398 [2024-11-27 04:27:50.946256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.398 04:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.659 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.659 "name": "Existed_Raid", 00:10:54.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.659 "strip_size_kb": 64, 00:10:54.659 "state": "configuring", 00:10:54.659 "raid_level": "concat", 00:10:54.659 "superblock": false, 00:10:54.659 "num_base_bdevs": 3, 00:10:54.659 "num_base_bdevs_discovered": 1, 00:10:54.659 "num_base_bdevs_operational": 3, 00:10:54.659 "base_bdevs_list": [ 00:10:54.659 { 00:10:54.659 "name": "BaseBdev1", 00:10:54.659 "uuid": "a8361dd6-7c91-492a-8439-14df4ba4045a", 00:10:54.659 "is_configured": true, 00:10:54.659 "data_offset": 0, 00:10:54.659 "data_size": 65536 00:10:54.659 }, 00:10:54.659 { 
00:10:54.659 "name": "BaseBdev2", 00:10:54.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.659 "is_configured": false, 00:10:54.659 "data_offset": 0, 00:10:54.659 "data_size": 0 00:10:54.659 }, 00:10:54.659 { 00:10:54.659 "name": "BaseBdev3", 00:10:54.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.659 "is_configured": false, 00:10:54.659 "data_offset": 0, 00:10:54.659 "data_size": 0 00:10:54.659 } 00:10:54.659 ] 00:10:54.659 }' 00:10:54.659 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.659 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.919 [2024-11-27 04:27:51.468326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.919 BaseBdev2 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.919 [ 00:10:54.919 { 00:10:54.919 "name": "BaseBdev2", 00:10:54.919 "aliases": [ 00:10:54.919 "a5beb26b-5c7a-4fc3-ad11-29ba83908a06" 00:10:54.919 ], 00:10:54.919 "product_name": "Malloc disk", 00:10:54.919 "block_size": 512, 00:10:54.919 "num_blocks": 65536, 00:10:54.919 "uuid": "a5beb26b-5c7a-4fc3-ad11-29ba83908a06", 00:10:54.919 "assigned_rate_limits": { 00:10:54.919 "rw_ios_per_sec": 0, 00:10:54.919 "rw_mbytes_per_sec": 0, 00:10:54.919 "r_mbytes_per_sec": 0, 00:10:54.919 "w_mbytes_per_sec": 0 00:10:54.919 }, 00:10:54.919 "claimed": true, 00:10:54.919 "claim_type": "exclusive_write", 00:10:54.919 "zoned": false, 00:10:54.919 "supported_io_types": { 00:10:54.919 "read": true, 00:10:54.919 "write": true, 00:10:54.919 "unmap": true, 00:10:54.919 "flush": true, 00:10:54.919 "reset": true, 00:10:54.919 "nvme_admin": false, 00:10:54.919 "nvme_io": false, 00:10:54.919 "nvme_io_md": false, 00:10:54.919 "write_zeroes": true, 00:10:54.919 "zcopy": true, 00:10:54.919 "get_zone_info": false, 00:10:54.919 "zone_management": false, 00:10:54.919 "zone_append": false, 00:10:54.919 "compare": false, 00:10:54.919 "compare_and_write": false, 00:10:54.919 "abort": true, 00:10:54.919 "seek_hole": false, 00:10:54.919 "seek_data": false, 00:10:54.919 
"copy": true, 00:10:54.919 "nvme_iov_md": false 00:10:54.919 }, 00:10:54.919 "memory_domains": [ 00:10:54.919 { 00:10:54.919 "dma_device_id": "system", 00:10:54.919 "dma_device_type": 1 00:10:54.919 }, 00:10:54.919 { 00:10:54.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.919 "dma_device_type": 2 00:10:54.919 } 00:10:54.919 ], 00:10:54.919 "driver_specific": {} 00:10:54.919 } 00:10:54.919 ] 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.919 
04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.919 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.179 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.179 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.179 "name": "Existed_Raid", 00:10:55.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.179 "strip_size_kb": 64, 00:10:55.179 "state": "configuring", 00:10:55.179 "raid_level": "concat", 00:10:55.179 "superblock": false, 00:10:55.179 "num_base_bdevs": 3, 00:10:55.179 "num_base_bdevs_discovered": 2, 00:10:55.179 "num_base_bdevs_operational": 3, 00:10:55.179 "base_bdevs_list": [ 00:10:55.179 { 00:10:55.179 "name": "BaseBdev1", 00:10:55.179 "uuid": "a8361dd6-7c91-492a-8439-14df4ba4045a", 00:10:55.179 "is_configured": true, 00:10:55.179 "data_offset": 0, 00:10:55.179 "data_size": 65536 00:10:55.179 }, 00:10:55.179 { 00:10:55.179 "name": "BaseBdev2", 00:10:55.179 "uuid": "a5beb26b-5c7a-4fc3-ad11-29ba83908a06", 00:10:55.179 "is_configured": true, 00:10:55.179 "data_offset": 0, 00:10:55.179 "data_size": 65536 00:10:55.179 }, 00:10:55.179 { 00:10:55.179 "name": "BaseBdev3", 00:10:55.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.179 "is_configured": false, 00:10:55.179 "data_offset": 0, 00:10:55.179 "data_size": 0 00:10:55.179 } 00:10:55.179 ] 00:10:55.179 }' 00:10:55.179 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.179 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.438 04:27:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.438 [2024-11-27 04:27:51.988656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.438 [2024-11-27 04:27:51.988817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.438 [2024-11-27 04:27:51.988853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:55.438 [2024-11-27 04:27:51.989232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:55.438 [2024-11-27 04:27:51.989492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.438 [2024-11-27 04:27:51.989543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:55.438 [2024-11-27 04:27:51.989928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.438 BaseBdev3 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.438 04:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.438 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.438 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.438 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.438 [ 00:10:55.438 { 00:10:55.438 "name": "BaseBdev3", 00:10:55.438 "aliases": [ 00:10:55.438 "c0726b6f-5de5-48ee-ad7d-ef3c299f6069" 00:10:55.438 ], 00:10:55.438 "product_name": "Malloc disk", 00:10:55.438 "block_size": 512, 00:10:55.438 "num_blocks": 65536, 00:10:55.438 "uuid": "c0726b6f-5de5-48ee-ad7d-ef3c299f6069", 00:10:55.438 "assigned_rate_limits": { 00:10:55.438 "rw_ios_per_sec": 0, 00:10:55.438 "rw_mbytes_per_sec": 0, 00:10:55.438 "r_mbytes_per_sec": 0, 00:10:55.438 "w_mbytes_per_sec": 0 00:10:55.438 }, 00:10:55.438 "claimed": true, 00:10:55.438 "claim_type": "exclusive_write", 00:10:55.438 "zoned": false, 00:10:55.438 "supported_io_types": { 00:10:55.438 "read": true, 00:10:55.438 "write": true, 00:10:55.438 "unmap": true, 00:10:55.438 "flush": true, 00:10:55.438 "reset": true, 00:10:55.438 "nvme_admin": false, 00:10:55.438 "nvme_io": false, 00:10:55.438 "nvme_io_md": false, 00:10:55.439 "write_zeroes": true, 00:10:55.439 "zcopy": true, 00:10:55.439 "get_zone_info": false, 00:10:55.439 "zone_management": false, 00:10:55.439 "zone_append": false, 00:10:55.439 "compare": false, 00:10:55.439 "compare_and_write": false, 
00:10:55.439 "abort": true, 00:10:55.439 "seek_hole": false, 00:10:55.439 "seek_data": false, 00:10:55.439 "copy": true, 00:10:55.439 "nvme_iov_md": false 00:10:55.439 }, 00:10:55.439 "memory_domains": [ 00:10:55.439 { 00:10:55.439 "dma_device_id": "system", 00:10:55.439 "dma_device_type": 1 00:10:55.439 }, 00:10:55.439 { 00:10:55.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.439 "dma_device_type": 2 00:10:55.439 } 00:10:55.439 ], 00:10:55.439 "driver_specific": {} 00:10:55.439 } 00:10:55.439 ] 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.439 
04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.439 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.698 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.698 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.698 "name": "Existed_Raid", 00:10:55.698 "uuid": "5e69a2b9-ead9-44bb-8b9a-6ab2d99cd3f7", 00:10:55.698 "strip_size_kb": 64, 00:10:55.698 "state": "online", 00:10:55.698 "raid_level": "concat", 00:10:55.698 "superblock": false, 00:10:55.698 "num_base_bdevs": 3, 00:10:55.698 "num_base_bdevs_discovered": 3, 00:10:55.698 "num_base_bdevs_operational": 3, 00:10:55.698 "base_bdevs_list": [ 00:10:55.698 { 00:10:55.698 "name": "BaseBdev1", 00:10:55.698 "uuid": "a8361dd6-7c91-492a-8439-14df4ba4045a", 00:10:55.698 "is_configured": true, 00:10:55.698 "data_offset": 0, 00:10:55.698 "data_size": 65536 00:10:55.698 }, 00:10:55.698 { 00:10:55.698 "name": "BaseBdev2", 00:10:55.698 "uuid": "a5beb26b-5c7a-4fc3-ad11-29ba83908a06", 00:10:55.698 "is_configured": true, 00:10:55.698 "data_offset": 0, 00:10:55.698 "data_size": 65536 00:10:55.698 }, 00:10:55.698 { 00:10:55.698 "name": "BaseBdev3", 00:10:55.698 "uuid": "c0726b6f-5de5-48ee-ad7d-ef3c299f6069", 00:10:55.698 "is_configured": true, 00:10:55.698 "data_offset": 0, 00:10:55.698 "data_size": 65536 00:10:55.698 } 00:10:55.698 ] 00:10:55.698 }' 00:10:55.698 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.698 04:27:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.958 [2024-11-27 04:27:52.496266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.958 "name": "Existed_Raid", 00:10:55.958 "aliases": [ 00:10:55.958 "5e69a2b9-ead9-44bb-8b9a-6ab2d99cd3f7" 00:10:55.958 ], 00:10:55.958 "product_name": "Raid Volume", 00:10:55.958 "block_size": 512, 00:10:55.958 "num_blocks": 196608, 00:10:55.958 "uuid": "5e69a2b9-ead9-44bb-8b9a-6ab2d99cd3f7", 00:10:55.958 "assigned_rate_limits": { 00:10:55.958 "rw_ios_per_sec": 0, 00:10:55.958 "rw_mbytes_per_sec": 0, 00:10:55.958 "r_mbytes_per_sec": 0, 00:10:55.958 
"w_mbytes_per_sec": 0 00:10:55.958 }, 00:10:55.958 "claimed": false, 00:10:55.958 "zoned": false, 00:10:55.958 "supported_io_types": { 00:10:55.958 "read": true, 00:10:55.958 "write": true, 00:10:55.958 "unmap": true, 00:10:55.958 "flush": true, 00:10:55.958 "reset": true, 00:10:55.958 "nvme_admin": false, 00:10:55.958 "nvme_io": false, 00:10:55.958 "nvme_io_md": false, 00:10:55.958 "write_zeroes": true, 00:10:55.958 "zcopy": false, 00:10:55.958 "get_zone_info": false, 00:10:55.958 "zone_management": false, 00:10:55.958 "zone_append": false, 00:10:55.958 "compare": false, 00:10:55.958 "compare_and_write": false, 00:10:55.958 "abort": false, 00:10:55.958 "seek_hole": false, 00:10:55.958 "seek_data": false, 00:10:55.958 "copy": false, 00:10:55.958 "nvme_iov_md": false 00:10:55.958 }, 00:10:55.958 "memory_domains": [ 00:10:55.958 { 00:10:55.958 "dma_device_id": "system", 00:10:55.958 "dma_device_type": 1 00:10:55.958 }, 00:10:55.958 { 00:10:55.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.958 "dma_device_type": 2 00:10:55.958 }, 00:10:55.958 { 00:10:55.958 "dma_device_id": "system", 00:10:55.958 "dma_device_type": 1 00:10:55.958 }, 00:10:55.958 { 00:10:55.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.958 "dma_device_type": 2 00:10:55.958 }, 00:10:55.958 { 00:10:55.958 "dma_device_id": "system", 00:10:55.958 "dma_device_type": 1 00:10:55.958 }, 00:10:55.958 { 00:10:55.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.958 "dma_device_type": 2 00:10:55.958 } 00:10:55.958 ], 00:10:55.958 "driver_specific": { 00:10:55.958 "raid": { 00:10:55.958 "uuid": "5e69a2b9-ead9-44bb-8b9a-6ab2d99cd3f7", 00:10:55.958 "strip_size_kb": 64, 00:10:55.958 "state": "online", 00:10:55.958 "raid_level": "concat", 00:10:55.958 "superblock": false, 00:10:55.958 "num_base_bdevs": 3, 00:10:55.958 "num_base_bdevs_discovered": 3, 00:10:55.958 "num_base_bdevs_operational": 3, 00:10:55.958 "base_bdevs_list": [ 00:10:55.958 { 00:10:55.958 "name": "BaseBdev1", 00:10:55.958 "uuid": 
"a8361dd6-7c91-492a-8439-14df4ba4045a", 00:10:55.958 "is_configured": true, 00:10:55.958 "data_offset": 0, 00:10:55.958 "data_size": 65536 00:10:55.958 }, 00:10:55.958 { 00:10:55.958 "name": "BaseBdev2", 00:10:55.958 "uuid": "a5beb26b-5c7a-4fc3-ad11-29ba83908a06", 00:10:55.958 "is_configured": true, 00:10:55.958 "data_offset": 0, 00:10:55.958 "data_size": 65536 00:10:55.958 }, 00:10:55.958 { 00:10:55.958 "name": "BaseBdev3", 00:10:55.958 "uuid": "c0726b6f-5de5-48ee-ad7d-ef3c299f6069", 00:10:55.958 "is_configured": true, 00:10:55.958 "data_offset": 0, 00:10:55.958 "data_size": 65536 00:10:55.958 } 00:10:55.958 ] 00:10:55.958 } 00:10:55.958 } 00:10:55.958 }' 00:10:55.958 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.219 BaseBdev2 00:10:56.219 BaseBdev3' 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.219 
04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.219 
04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.219 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.219 [2024-11-27 04:27:52.759593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.219 [2024-11-27 04:27:52.759638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.219 [2024-11-27 04:27:52.759699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.478 "name": "Existed_Raid", 00:10:56.478 "uuid": "5e69a2b9-ead9-44bb-8b9a-6ab2d99cd3f7", 00:10:56.478 "strip_size_kb": 64, 00:10:56.478 "state": "offline", 00:10:56.478 "raid_level": "concat", 00:10:56.478 "superblock": false, 00:10:56.478 "num_base_bdevs": 3, 00:10:56.478 "num_base_bdevs_discovered": 2, 00:10:56.478 "num_base_bdevs_operational": 2, 00:10:56.478 "base_bdevs_list": [ 00:10:56.478 { 00:10:56.478 "name": null, 00:10:56.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.478 "is_configured": false, 00:10:56.478 "data_offset": 0, 00:10:56.478 "data_size": 65536 00:10:56.478 }, 00:10:56.478 { 00:10:56.478 "name": "BaseBdev2", 00:10:56.478 "uuid": "a5beb26b-5c7a-4fc3-ad11-29ba83908a06", 00:10:56.478 
"is_configured": true, 00:10:56.478 "data_offset": 0, 00:10:56.478 "data_size": 65536 00:10:56.478 }, 00:10:56.478 { 00:10:56.478 "name": "BaseBdev3", 00:10:56.478 "uuid": "c0726b6f-5de5-48ee-ad7d-ef3c299f6069", 00:10:56.478 "is_configured": true, 00:10:56.478 "data_offset": 0, 00:10:56.478 "data_size": 65536 00:10:56.478 } 00:10:56.478 ] 00:10:56.478 }' 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.478 04:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.738 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.738 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.738 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.738 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.738 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.738 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.997 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.997 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.997 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.998 [2024-11-27 04:27:53.343854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.998 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.998 [2024-11-27 04:27:53.526408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.998 [2024-11-27 04:27:53.526603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.257 BaseBdev2 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 
-- # local i 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.257 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.257 [ 00:10:57.257 { 00:10:57.257 "name": "BaseBdev2", 00:10:57.257 "aliases": [ 00:10:57.257 "91f5839f-83a7-4d4f-8e81-85e2481c8147" 00:10:57.257 ], 00:10:57.257 "product_name": "Malloc disk", 00:10:57.257 "block_size": 512, 00:10:57.257 "num_blocks": 65536, 00:10:57.258 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:10:57.258 "assigned_rate_limits": { 00:10:57.258 "rw_ios_per_sec": 0, 00:10:57.258 "rw_mbytes_per_sec": 0, 00:10:57.258 "r_mbytes_per_sec": 0, 00:10:57.258 "w_mbytes_per_sec": 0 00:10:57.258 }, 00:10:57.258 "claimed": false, 00:10:57.258 "zoned": false, 00:10:57.258 "supported_io_types": { 00:10:57.258 "read": true, 00:10:57.258 "write": true, 00:10:57.258 "unmap": true, 00:10:57.258 "flush": true, 00:10:57.258 "reset": true, 00:10:57.258 "nvme_admin": false, 00:10:57.258 "nvme_io": false, 00:10:57.258 "nvme_io_md": false, 00:10:57.258 "write_zeroes": true, 00:10:57.258 "zcopy": true, 00:10:57.258 "get_zone_info": false, 
00:10:57.258 "zone_management": false, 00:10:57.258 "zone_append": false, 00:10:57.258 "compare": false, 00:10:57.258 "compare_and_write": false, 00:10:57.258 "abort": true, 00:10:57.258 "seek_hole": false, 00:10:57.258 "seek_data": false, 00:10:57.258 "copy": true, 00:10:57.258 "nvme_iov_md": false 00:10:57.258 }, 00:10:57.258 "memory_domains": [ 00:10:57.258 { 00:10:57.258 "dma_device_id": "system", 00:10:57.258 "dma_device_type": 1 00:10:57.258 }, 00:10:57.258 { 00:10:57.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.258 "dma_device_type": 2 00:10:57.258 } 00:10:57.258 ], 00:10:57.258 "driver_specific": {} 00:10:57.258 } 00:10:57.258 ] 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.258 BaseBdev3 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 
00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.258 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.517 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.517 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.517 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.517 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.517 [ 00:10:57.517 { 00:10:57.517 "name": "BaseBdev3", 00:10:57.517 "aliases": [ 00:10:57.517 "c0c6d77e-209e-4d27-b331-6536f458a9bd" 00:10:57.517 ], 00:10:57.517 "product_name": "Malloc disk", 00:10:57.517 "block_size": 512, 00:10:57.517 "num_blocks": 65536, 00:10:57.517 "uuid": "c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:10:57.517 "assigned_rate_limits": { 00:10:57.518 "rw_ios_per_sec": 0, 00:10:57.518 "rw_mbytes_per_sec": 0, 00:10:57.518 "r_mbytes_per_sec": 0, 00:10:57.518 "w_mbytes_per_sec": 0 00:10:57.518 }, 00:10:57.518 "claimed": false, 00:10:57.518 "zoned": false, 00:10:57.518 "supported_io_types": { 00:10:57.518 "read": true, 00:10:57.518 "write": true, 00:10:57.518 "unmap": true, 00:10:57.518 "flush": true, 00:10:57.518 "reset": true, 00:10:57.518 "nvme_admin": false, 00:10:57.518 "nvme_io": false, 00:10:57.518 "nvme_io_md": false, 00:10:57.518 "write_zeroes": true, 00:10:57.518 "zcopy": true, 00:10:57.518 "get_zone_info": false, 00:10:57.518 
"zone_management": false, 00:10:57.518 "zone_append": false, 00:10:57.518 "compare": false, 00:10:57.518 "compare_and_write": false, 00:10:57.518 "abort": true, 00:10:57.518 "seek_hole": false, 00:10:57.518 "seek_data": false, 00:10:57.518 "copy": true, 00:10:57.518 "nvme_iov_md": false 00:10:57.518 }, 00:10:57.518 "memory_domains": [ 00:10:57.518 { 00:10:57.518 "dma_device_id": "system", 00:10:57.518 "dma_device_type": 1 00:10:57.518 }, 00:10:57.518 { 00:10:57.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.518 "dma_device_type": 2 00:10:57.518 } 00:10:57.518 ], 00:10:57.518 "driver_specific": {} 00:10:57.518 } 00:10:57.518 ] 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.518 [2024-11-27 04:27:53.866389] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.518 [2024-11-27 04:27:53.866564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.518 [2024-11-27 04:27:53.866637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.518 [2024-11-27 04:27:53.869155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.518 04:27:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.518 "name": "Existed_Raid", 00:10:57.518 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:57.518 "strip_size_kb": 64, 00:10:57.518 "state": "configuring", 00:10:57.518 "raid_level": "concat", 00:10:57.518 "superblock": false, 00:10:57.518 "num_base_bdevs": 3, 00:10:57.518 "num_base_bdevs_discovered": 2, 00:10:57.518 "num_base_bdevs_operational": 3, 00:10:57.518 "base_bdevs_list": [ 00:10:57.518 { 00:10:57.518 "name": "BaseBdev1", 00:10:57.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.518 "is_configured": false, 00:10:57.518 "data_offset": 0, 00:10:57.518 "data_size": 0 00:10:57.518 }, 00:10:57.518 { 00:10:57.518 "name": "BaseBdev2", 00:10:57.518 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:10:57.518 "is_configured": true, 00:10:57.518 "data_offset": 0, 00:10:57.518 "data_size": 65536 00:10:57.518 }, 00:10:57.518 { 00:10:57.518 "name": "BaseBdev3", 00:10:57.518 "uuid": "c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:10:57.518 "is_configured": true, 00:10:57.518 "data_offset": 0, 00:10:57.518 "data_size": 65536 00:10:57.518 } 00:10:57.518 ] 00:10:57.518 }' 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.518 04:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.777 [2024-11-27 04:27:54.345592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:57.777 04:27:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.777 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.036 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.036 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.036 "name": "Existed_Raid", 00:10:58.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.036 "strip_size_kb": 64, 00:10:58.036 "state": "configuring", 00:10:58.036 "raid_level": "concat", 00:10:58.036 "superblock": false, 00:10:58.036 "num_base_bdevs": 3, 00:10:58.036 "num_base_bdevs_discovered": 1, 00:10:58.036 
"num_base_bdevs_operational": 3, 00:10:58.036 "base_bdevs_list": [ 00:10:58.036 { 00:10:58.036 "name": "BaseBdev1", 00:10:58.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.036 "is_configured": false, 00:10:58.036 "data_offset": 0, 00:10:58.036 "data_size": 0 00:10:58.036 }, 00:10:58.036 { 00:10:58.036 "name": null, 00:10:58.036 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:10:58.036 "is_configured": false, 00:10:58.036 "data_offset": 0, 00:10:58.036 "data_size": 65536 00:10:58.036 }, 00:10:58.036 { 00:10:58.036 "name": "BaseBdev3", 00:10:58.036 "uuid": "c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:10:58.036 "is_configured": true, 00:10:58.036 "data_offset": 0, 00:10:58.036 "data_size": 65536 00:10:58.036 } 00:10:58.036 ] 00:10:58.036 }' 00:10:58.036 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.036 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.294 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.295 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.295 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.295 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.295 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.554 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:58.554 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.554 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.554 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:58.554 [2024-11-27 04:27:54.933912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.554 BaseBdev1 00:10:58.554 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.554 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:58.554 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:58.554 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.555 [ 00:10:58.555 { 00:10:58.555 "name": "BaseBdev1", 00:10:58.555 "aliases": [ 00:10:58.555 "2cce302f-fd37-4e4d-bf29-3d85f77a876d" 00:10:58.555 ], 00:10:58.555 "product_name": "Malloc disk", 00:10:58.555 "block_size": 512, 00:10:58.555 "num_blocks": 65536, 00:10:58.555 
"uuid": "2cce302f-fd37-4e4d-bf29-3d85f77a876d", 00:10:58.555 "assigned_rate_limits": { 00:10:58.555 "rw_ios_per_sec": 0, 00:10:58.555 "rw_mbytes_per_sec": 0, 00:10:58.555 "r_mbytes_per_sec": 0, 00:10:58.555 "w_mbytes_per_sec": 0 00:10:58.555 }, 00:10:58.555 "claimed": true, 00:10:58.555 "claim_type": "exclusive_write", 00:10:58.555 "zoned": false, 00:10:58.555 "supported_io_types": { 00:10:58.555 "read": true, 00:10:58.555 "write": true, 00:10:58.555 "unmap": true, 00:10:58.555 "flush": true, 00:10:58.555 "reset": true, 00:10:58.555 "nvme_admin": false, 00:10:58.555 "nvme_io": false, 00:10:58.555 "nvme_io_md": false, 00:10:58.555 "write_zeroes": true, 00:10:58.555 "zcopy": true, 00:10:58.555 "get_zone_info": false, 00:10:58.555 "zone_management": false, 00:10:58.555 "zone_append": false, 00:10:58.555 "compare": false, 00:10:58.555 "compare_and_write": false, 00:10:58.555 "abort": true, 00:10:58.555 "seek_hole": false, 00:10:58.555 "seek_data": false, 00:10:58.555 "copy": true, 00:10:58.555 "nvme_iov_md": false 00:10:58.555 }, 00:10:58.555 "memory_domains": [ 00:10:58.555 { 00:10:58.555 "dma_device_id": "system", 00:10:58.555 "dma_device_type": 1 00:10:58.555 }, 00:10:58.555 { 00:10:58.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.555 "dma_device_type": 2 00:10:58.555 } 00:10:58.555 ], 00:10:58.555 "driver_specific": {} 00:10:58.555 } 00:10:58.555 ] 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.555 
04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.555 04:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.555 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.555 "name": "Existed_Raid", 00:10:58.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.555 "strip_size_kb": 64, 00:10:58.555 "state": "configuring", 00:10:58.555 "raid_level": "concat", 00:10:58.555 "superblock": false, 00:10:58.555 "num_base_bdevs": 3, 00:10:58.555 "num_base_bdevs_discovered": 2, 00:10:58.555 "num_base_bdevs_operational": 3, 00:10:58.555 "base_bdevs_list": [ 00:10:58.555 { 00:10:58.555 "name": "BaseBdev1", 00:10:58.555 "uuid": "2cce302f-fd37-4e4d-bf29-3d85f77a876d", 00:10:58.555 "is_configured": true, 00:10:58.555 
"data_offset": 0, 00:10:58.555 "data_size": 65536 00:10:58.555 }, 00:10:58.555 { 00:10:58.555 "name": null, 00:10:58.555 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:10:58.555 "is_configured": false, 00:10:58.555 "data_offset": 0, 00:10:58.555 "data_size": 65536 00:10:58.555 }, 00:10:58.555 { 00:10:58.555 "name": "BaseBdev3", 00:10:58.555 "uuid": "c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:10:58.555 "is_configured": true, 00:10:58.555 "data_offset": 0, 00:10:58.555 "data_size": 65536 00:10:58.555 } 00:10:58.555 ] 00:10:58.555 }' 00:10:58.555 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.555 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.162 [2024-11-27 04:27:55.513078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.162 
04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.162 "name": "Existed_Raid", 00:10:59.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.162 "strip_size_kb": 64, 00:10:59.162 "state": "configuring", 
00:10:59.162 "raid_level": "concat", 00:10:59.162 "superblock": false, 00:10:59.162 "num_base_bdevs": 3, 00:10:59.162 "num_base_bdevs_discovered": 1, 00:10:59.162 "num_base_bdevs_operational": 3, 00:10:59.162 "base_bdevs_list": [ 00:10:59.162 { 00:10:59.162 "name": "BaseBdev1", 00:10:59.162 "uuid": "2cce302f-fd37-4e4d-bf29-3d85f77a876d", 00:10:59.162 "is_configured": true, 00:10:59.162 "data_offset": 0, 00:10:59.162 "data_size": 65536 00:10:59.162 }, 00:10:59.162 { 00:10:59.162 "name": null, 00:10:59.162 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:10:59.162 "is_configured": false, 00:10:59.162 "data_offset": 0, 00:10:59.162 "data_size": 65536 00:10:59.162 }, 00:10:59.162 { 00:10:59.162 "name": null, 00:10:59.162 "uuid": "c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:10:59.162 "is_configured": false, 00:10:59.162 "data_offset": 0, 00:10:59.162 "data_size": 65536 00:10:59.162 } 00:10:59.162 ] 00:10:59.162 }' 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.162 04:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:59.731 04:27:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.731 [2024-11-27 04:27:56.060365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.731 "name": "Existed_Raid", 00:10:59.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.731 "strip_size_kb": 64, 00:10:59.731 "state": "configuring", 00:10:59.731 "raid_level": "concat", 00:10:59.731 "superblock": false, 00:10:59.731 "num_base_bdevs": 3, 00:10:59.731 "num_base_bdevs_discovered": 2, 00:10:59.731 "num_base_bdevs_operational": 3, 00:10:59.731 "base_bdevs_list": [ 00:10:59.731 { 00:10:59.731 "name": "BaseBdev1", 00:10:59.731 "uuid": "2cce302f-fd37-4e4d-bf29-3d85f77a876d", 00:10:59.731 "is_configured": true, 00:10:59.731 "data_offset": 0, 00:10:59.731 "data_size": 65536 00:10:59.731 }, 00:10:59.731 { 00:10:59.731 "name": null, 00:10:59.731 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:10:59.731 "is_configured": false, 00:10:59.731 "data_offset": 0, 00:10:59.731 "data_size": 65536 00:10:59.731 }, 00:10:59.731 { 00:10:59.731 "name": "BaseBdev3", 00:10:59.731 "uuid": "c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:10:59.731 "is_configured": true, 00:10:59.731 "data_offset": 0, 00:10:59.731 "data_size": 65536 00:10:59.731 } 00:10:59.731 ] 00:10:59.731 }' 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.731 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.991 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.991 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.991 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.991 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:59.991 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.251 [2024-11-27 04:27:56.592368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.251 04:27:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.251 "name": "Existed_Raid", 00:11:00.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.251 "strip_size_kb": 64, 00:11:00.251 "state": "configuring", 00:11:00.251 "raid_level": "concat", 00:11:00.251 "superblock": false, 00:11:00.251 "num_base_bdevs": 3, 00:11:00.251 "num_base_bdevs_discovered": 1, 00:11:00.251 "num_base_bdevs_operational": 3, 00:11:00.251 "base_bdevs_list": [ 00:11:00.251 { 00:11:00.251 "name": null, 00:11:00.251 "uuid": "2cce302f-fd37-4e4d-bf29-3d85f77a876d", 00:11:00.251 "is_configured": false, 00:11:00.251 "data_offset": 0, 00:11:00.251 "data_size": 65536 00:11:00.251 }, 00:11:00.251 { 00:11:00.251 "name": null, 00:11:00.251 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:11:00.251 "is_configured": false, 00:11:00.251 "data_offset": 0, 00:11:00.251 "data_size": 65536 00:11:00.251 }, 00:11:00.251 { 00:11:00.251 "name": "BaseBdev3", 00:11:00.251 "uuid": "c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:11:00.251 "is_configured": true, 00:11:00.251 "data_offset": 0, 00:11:00.251 "data_size": 65536 00:11:00.251 } 00:11:00.251 ] 00:11:00.251 }' 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.251 04:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.817 04:27:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.817 [2024-11-27 04:27:57.253786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.817 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.818 04:27:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.818 "name": "Existed_Raid", 00:11:00.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.818 "strip_size_kb": 64, 00:11:00.818 "state": "configuring", 00:11:00.818 "raid_level": "concat", 00:11:00.818 "superblock": false, 00:11:00.818 "num_base_bdevs": 3, 00:11:00.818 "num_base_bdevs_discovered": 2, 00:11:00.818 "num_base_bdevs_operational": 3, 00:11:00.818 "base_bdevs_list": [ 00:11:00.818 { 00:11:00.818 "name": null, 00:11:00.818 "uuid": "2cce302f-fd37-4e4d-bf29-3d85f77a876d", 00:11:00.818 "is_configured": false, 00:11:00.818 "data_offset": 0, 00:11:00.818 "data_size": 65536 00:11:00.818 }, 00:11:00.818 { 00:11:00.818 "name": "BaseBdev2", 00:11:00.818 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:11:00.818 "is_configured": true, 00:11:00.818 "data_offset": 0, 00:11:00.818 "data_size": 65536 00:11:00.818 }, 00:11:00.818 { 00:11:00.818 "name": "BaseBdev3", 00:11:00.818 "uuid": 
"c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:11:00.818 "is_configured": true, 00:11:00.818 "data_offset": 0, 00:11:00.818 "data_size": 65536 00:11:00.818 } 00:11:00.818 ] 00:11:00.818 }' 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.818 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.387 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.387 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:01.387 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2cce302f-fd37-4e4d-bf29-3d85f77a876d 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.388 [2024-11-27 04:27:57.910782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:01.388 [2024-11-27 04:27:57.910995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:01.388 [2024-11-27 04:27:57.911031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:01.388 [2024-11-27 04:27:57.911448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:01.388 [2024-11-27 04:27:57.911723] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:01.388 [2024-11-27 04:27:57.911774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:01.388 [2024-11-27 04:27:57.912240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.388 NewBaseBdev 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.388 
04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.388 [ 00:11:01.388 { 00:11:01.388 "name": "NewBaseBdev", 00:11:01.388 "aliases": [ 00:11:01.388 "2cce302f-fd37-4e4d-bf29-3d85f77a876d" 00:11:01.388 ], 00:11:01.388 "product_name": "Malloc disk", 00:11:01.388 "block_size": 512, 00:11:01.388 "num_blocks": 65536, 00:11:01.388 "uuid": "2cce302f-fd37-4e4d-bf29-3d85f77a876d", 00:11:01.388 "assigned_rate_limits": { 00:11:01.388 "rw_ios_per_sec": 0, 00:11:01.388 "rw_mbytes_per_sec": 0, 00:11:01.388 "r_mbytes_per_sec": 0, 00:11:01.388 "w_mbytes_per_sec": 0 00:11:01.388 }, 00:11:01.388 "claimed": true, 00:11:01.388 "claim_type": "exclusive_write", 00:11:01.388 "zoned": false, 00:11:01.388 "supported_io_types": { 00:11:01.388 "read": true, 00:11:01.388 "write": true, 00:11:01.388 "unmap": true, 00:11:01.388 "flush": true, 00:11:01.388 "reset": true, 00:11:01.388 "nvme_admin": false, 00:11:01.388 "nvme_io": false, 00:11:01.388 "nvme_io_md": false, 00:11:01.388 "write_zeroes": true, 00:11:01.388 "zcopy": true, 00:11:01.388 "get_zone_info": false, 00:11:01.388 "zone_management": false, 00:11:01.388 "zone_append": false, 00:11:01.388 "compare": false, 00:11:01.388 "compare_and_write": false, 00:11:01.388 "abort": true, 00:11:01.388 "seek_hole": false, 00:11:01.388 "seek_data": false, 00:11:01.388 "copy": true, 00:11:01.388 "nvme_iov_md": false 00:11:01.388 }, 00:11:01.388 "memory_domains": [ 00:11:01.388 { 00:11:01.388 "dma_device_id": "system", 00:11:01.388 "dma_device_type": 1 
00:11:01.388 }, 00:11:01.388 { 00:11:01.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.388 "dma_device_type": 2 00:11:01.388 } 00:11:01.388 ], 00:11:01.388 "driver_specific": {} 00:11:01.388 } 00:11:01.388 ] 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.388 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.648 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.648 "name": "Existed_Raid", 00:11:01.648 "uuid": "811865ba-433c-472a-b4a8-7867eb2ab3a5", 00:11:01.648 "strip_size_kb": 64, 00:11:01.648 "state": "online", 00:11:01.648 "raid_level": "concat", 00:11:01.648 "superblock": false, 00:11:01.648 "num_base_bdevs": 3, 00:11:01.648 "num_base_bdevs_discovered": 3, 00:11:01.648 "num_base_bdevs_operational": 3, 00:11:01.648 "base_bdevs_list": [ 00:11:01.648 { 00:11:01.648 "name": "NewBaseBdev", 00:11:01.648 "uuid": "2cce302f-fd37-4e4d-bf29-3d85f77a876d", 00:11:01.648 "is_configured": true, 00:11:01.648 "data_offset": 0, 00:11:01.648 "data_size": 65536 00:11:01.648 }, 00:11:01.648 { 00:11:01.648 "name": "BaseBdev2", 00:11:01.648 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:11:01.648 "is_configured": true, 00:11:01.648 "data_offset": 0, 00:11:01.648 "data_size": 65536 00:11:01.648 }, 00:11:01.648 { 00:11:01.648 "name": "BaseBdev3", 00:11:01.648 "uuid": "c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:11:01.648 "is_configured": true, 00:11:01.648 "data_offset": 0, 00:11:01.648 "data_size": 65536 00:11:01.648 } 00:11:01.648 ] 00:11:01.648 }' 00:11:01.648 04:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.648 04:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- 
# local base_bdev_names 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.908 [2024-11-27 04:27:58.426368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.908 "name": "Existed_Raid", 00:11:01.908 "aliases": [ 00:11:01.908 "811865ba-433c-472a-b4a8-7867eb2ab3a5" 00:11:01.908 ], 00:11:01.908 "product_name": "Raid Volume", 00:11:01.908 "block_size": 512, 00:11:01.908 "num_blocks": 196608, 00:11:01.908 "uuid": "811865ba-433c-472a-b4a8-7867eb2ab3a5", 00:11:01.908 "assigned_rate_limits": { 00:11:01.908 "rw_ios_per_sec": 0, 00:11:01.908 "rw_mbytes_per_sec": 0, 00:11:01.908 "r_mbytes_per_sec": 0, 00:11:01.908 "w_mbytes_per_sec": 0 00:11:01.908 }, 00:11:01.908 "claimed": false, 00:11:01.908 "zoned": false, 00:11:01.908 "supported_io_types": { 00:11:01.908 "read": true, 00:11:01.908 "write": true, 00:11:01.908 "unmap": true, 00:11:01.908 "flush": true, 00:11:01.908 "reset": true, 00:11:01.908 "nvme_admin": false, 00:11:01.908 "nvme_io": false, 00:11:01.908 "nvme_io_md": false, 00:11:01.908 "write_zeroes": true, 00:11:01.908 "zcopy": false, 00:11:01.908 "get_zone_info": false, 00:11:01.908 "zone_management": false, 00:11:01.908 
"zone_append": false, 00:11:01.908 "compare": false, 00:11:01.908 "compare_and_write": false, 00:11:01.908 "abort": false, 00:11:01.908 "seek_hole": false, 00:11:01.908 "seek_data": false, 00:11:01.908 "copy": false, 00:11:01.908 "nvme_iov_md": false 00:11:01.908 }, 00:11:01.908 "memory_domains": [ 00:11:01.908 { 00:11:01.908 "dma_device_id": "system", 00:11:01.908 "dma_device_type": 1 00:11:01.908 }, 00:11:01.908 { 00:11:01.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.908 "dma_device_type": 2 00:11:01.908 }, 00:11:01.908 { 00:11:01.908 "dma_device_id": "system", 00:11:01.908 "dma_device_type": 1 00:11:01.908 }, 00:11:01.908 { 00:11:01.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.908 "dma_device_type": 2 00:11:01.908 }, 00:11:01.908 { 00:11:01.908 "dma_device_id": "system", 00:11:01.908 "dma_device_type": 1 00:11:01.908 }, 00:11:01.908 { 00:11:01.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.908 "dma_device_type": 2 00:11:01.908 } 00:11:01.908 ], 00:11:01.908 "driver_specific": { 00:11:01.908 "raid": { 00:11:01.908 "uuid": "811865ba-433c-472a-b4a8-7867eb2ab3a5", 00:11:01.908 "strip_size_kb": 64, 00:11:01.908 "state": "online", 00:11:01.908 "raid_level": "concat", 00:11:01.908 "superblock": false, 00:11:01.908 "num_base_bdevs": 3, 00:11:01.908 "num_base_bdevs_discovered": 3, 00:11:01.908 "num_base_bdevs_operational": 3, 00:11:01.908 "base_bdevs_list": [ 00:11:01.908 { 00:11:01.908 "name": "NewBaseBdev", 00:11:01.908 "uuid": "2cce302f-fd37-4e4d-bf29-3d85f77a876d", 00:11:01.908 "is_configured": true, 00:11:01.908 "data_offset": 0, 00:11:01.908 "data_size": 65536 00:11:01.908 }, 00:11:01.908 { 00:11:01.908 "name": "BaseBdev2", 00:11:01.908 "uuid": "91f5839f-83a7-4d4f-8e81-85e2481c8147", 00:11:01.908 "is_configured": true, 00:11:01.908 "data_offset": 0, 00:11:01.908 "data_size": 65536 00:11:01.908 }, 00:11:01.908 { 00:11:01.908 "name": "BaseBdev3", 00:11:01.908 "uuid": "c0c6d77e-209e-4d27-b331-6536f458a9bd", 00:11:01.908 "is_configured": 
true, 00:11:01.908 "data_offset": 0, 00:11:01.908 "data_size": 65536 00:11:01.908 } 00:11:01.908 ] 00:11:01.908 } 00:11:01.908 } 00:11:01.908 }' 00:11:01.908 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:02.168 BaseBdev2 00:11:02.168 BaseBdev3' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.168 [2024-11-27 04:27:58.697518] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:11:02.168 [2024-11-27 04:27:58.697608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.168 [2024-11-27 04:27:58.697728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.168 [2024-11-27 04:27:58.697797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.168 [2024-11-27 04:27:58.697812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65816 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65816 ']' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65816 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65816 00:11:02.168 killing process with pid 65816 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65816' 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65816 00:11:02.168 [2024-11-27 04:27:58.736852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:11:02.168 04:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65816 00:11:02.738 [2024-11-27 04:27:59.074145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:04.117 ************************************ 00:11:04.117 END TEST raid_state_function_test 00:11:04.117 ************************************ 00:11:04.117 00:11:04.117 real 0m11.413s 00:11:04.117 user 0m18.070s 00:11:04.117 sys 0m1.925s 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 04:28:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:04.117 04:28:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:04.117 04:28:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.117 04:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 ************************************ 00:11:04.117 START TEST raid_state_function_test_sb 00:11:04.117 ************************************ 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
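The oddly escaped comparisons traced above (e.g. `[[ 512 == \5\1\2\ \ \  ]]`) are an artifact of bash xtrace: the right-hand side of a `[[ == ]]` test is printed with every character backslash-escaped, which forces a literal string match instead of a glob. A minimal sketch (not part of the test suite, values illustrative):

```shell
# Bash [[ == ]] with a backslash-escaped right-hand side performs a literal
# comparison; xtrace renders it exactly as seen in the log above.
cmp_base_bdev='512'
result=''
if [[ $cmp_base_bdev == \5\1\2 ]]; then
  result=match
fi
echo "$result"
```

An unescaped pattern such as `[[ $cmp_base_bdev == 5?2 ]]` would glob-match instead, which is why the harness escapes each character when comparing captured bdev properties.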
00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:04.117 Process raid pid: 66448 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:04.117 04:28:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66448 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66448' 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66448 00:11:04.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66448 ']' 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.117 04:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 [2024-11-27 04:28:00.520194] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
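The `waitforlisten 66448` step above blocks until the freshly launched `bdev_svc` app has created its RPC socket. A minimal sketch of that polling pattern, under illustrative paths (a plain temp file stands in for `/var/tmp/spdk.sock`, and the background subshell stands in for the daemon):

```shell
# Poll with a bounded retry budget until the "daemon" has created its socket,
# mirroring the waitforlisten helper's max_retries=100 loop in the trace.
rpc_addr="$(mktemp -u)"            # illustrative stand-in path
max_retries=100
( sleep 0.2; : > "$rpc_addr" ) &   # simulate the app creating its RPC socket
for ((n = 0; n < max_retries; n++)); do
  [ -e "$rpc_addr" ] && break
  sleep 0.1
done
status=down
[ -e "$rpc_addr" ] && status=listening
echo "$status"
rm -f "$rpc_addr"
```

The real helper additionally issues an RPC over the socket to confirm the target responds, not just that the path exists.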
00:11:04.117 [2024-11-27 04:28:00.520433] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.117 [2024-11-27 04:28:00.701253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.376 [2024-11-27 04:28:00.831673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.636 [2024-11-27 04:28:01.060895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.636 [2024-11-27 04:28:01.061029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.896 [2024-11-27 04:28:01.446560] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.896 [2024-11-27 04:28:01.446679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.896 [2024-11-27 04:28:01.446714] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.896 [2024-11-27 04:28:01.446741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.896 [2024-11-27 04:28:01.446763] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:04.896 [2024-11-27 04:28:01.446787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.896 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.897 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.897 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.897 04:28:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.155 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.156 "name": "Existed_Raid", 00:11:05.156 "uuid": "d042f85e-addd-48a9-8f95-02f3fdd6774b", 00:11:05.156 "strip_size_kb": 64, 00:11:05.156 "state": "configuring", 00:11:05.156 "raid_level": "concat", 00:11:05.156 "superblock": true, 00:11:05.156 "num_base_bdevs": 3, 00:11:05.156 "num_base_bdevs_discovered": 0, 00:11:05.156 "num_base_bdevs_operational": 3, 00:11:05.156 "base_bdevs_list": [ 00:11:05.156 { 00:11:05.156 "name": "BaseBdev1", 00:11:05.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.156 "is_configured": false, 00:11:05.156 "data_offset": 0, 00:11:05.156 "data_size": 0 00:11:05.156 }, 00:11:05.156 { 00:11:05.156 "name": "BaseBdev2", 00:11:05.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.156 "is_configured": false, 00:11:05.156 "data_offset": 0, 00:11:05.156 "data_size": 0 00:11:05.156 }, 00:11:05.156 { 00:11:05.156 "name": "BaseBdev3", 00:11:05.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.156 "is_configured": false, 00:11:05.156 "data_offset": 0, 00:11:05.156 "data_size": 0 00:11:05.156 } 00:11:05.156 ] 00:11:05.156 }' 00:11:05.156 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.156 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.415 [2024-11-27 04:28:01.889975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.415 [2024-11-27 04:28:01.890039] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.415 [2024-11-27 04:28:01.901989] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.415 [2024-11-27 04:28:01.902076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.415 [2024-11-27 04:28:01.902109] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.415 [2024-11-27 04:28:01.902123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.415 [2024-11-27 04:28:01.902132] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.415 [2024-11-27 04:28:01.902143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.415 [2024-11-27 04:28:01.967829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.415 BaseBdev1 
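The `killprocess 65816` sequence traced earlier probes the pid with `kill -0` (and checks the process name via `ps`) before signalling, then waits for it to exit. A minimal sketch of that pattern, with a `sleep` standing in for the `bdev_svc` app and all values illustrative:

```shell
# Probe the pid with 'kill -0' before sending a signal, then confirm the
# process is gone after reaping it -- the killprocess helper's basic shape.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
  echo "killing process with pid $pid"
  kill "$pid"
fi
wait "$pid" 2>/dev/null || true   # wait returns the signal status; ignore it
killed=no
kill -0 "$pid" 2>/dev/null || killed=yes
echo "killed=$killed"
```

`kill -0` sends no signal at all; it only reports whether the pid exists and is signallable, which is why the helper uses it both before and after the real `kill`.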
00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.415 [ 00:11:05.415 { 00:11:05.415 "name": "BaseBdev1", 00:11:05.415 "aliases": [ 00:11:05.415 "e187aaaf-7d7f-4b1b-929b-ee5b3710876c" 00:11:05.415 ], 00:11:05.415 "product_name": "Malloc disk", 00:11:05.415 "block_size": 512, 00:11:05.415 "num_blocks": 65536, 00:11:05.415 "uuid": "e187aaaf-7d7f-4b1b-929b-ee5b3710876c", 00:11:05.415 "assigned_rate_limits": { 00:11:05.415 
"rw_ios_per_sec": 0, 00:11:05.415 "rw_mbytes_per_sec": 0, 00:11:05.415 "r_mbytes_per_sec": 0, 00:11:05.415 "w_mbytes_per_sec": 0 00:11:05.415 }, 00:11:05.415 "claimed": true, 00:11:05.415 "claim_type": "exclusive_write", 00:11:05.415 "zoned": false, 00:11:05.415 "supported_io_types": { 00:11:05.415 "read": true, 00:11:05.415 "write": true, 00:11:05.415 "unmap": true, 00:11:05.415 "flush": true, 00:11:05.415 "reset": true, 00:11:05.415 "nvme_admin": false, 00:11:05.415 "nvme_io": false, 00:11:05.415 "nvme_io_md": false, 00:11:05.415 "write_zeroes": true, 00:11:05.415 "zcopy": true, 00:11:05.415 "get_zone_info": false, 00:11:05.415 "zone_management": false, 00:11:05.415 "zone_append": false, 00:11:05.415 "compare": false, 00:11:05.415 "compare_and_write": false, 00:11:05.415 "abort": true, 00:11:05.415 "seek_hole": false, 00:11:05.415 "seek_data": false, 00:11:05.415 "copy": true, 00:11:05.415 "nvme_iov_md": false 00:11:05.415 }, 00:11:05.415 "memory_domains": [ 00:11:05.415 { 00:11:05.415 "dma_device_id": "system", 00:11:05.415 "dma_device_type": 1 00:11:05.415 }, 00:11:05.415 { 00:11:05.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.415 "dma_device_type": 2 00:11:05.415 } 00:11:05.415 ], 00:11:05.415 "driver_specific": {} 00:11:05.415 } 00:11:05.415 ] 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.415 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.674 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.674 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.674 04:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.674 04:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.674 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.674 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.674 "name": "Existed_Raid", 00:11:05.674 "uuid": "08622560-fd56-422c-be1f-090cf9bed762", 00:11:05.674 "strip_size_kb": 64, 00:11:05.674 "state": "configuring", 00:11:05.675 "raid_level": "concat", 00:11:05.675 "superblock": true, 00:11:05.675 "num_base_bdevs": 3, 00:11:05.675 "num_base_bdevs_discovered": 1, 00:11:05.675 "num_base_bdevs_operational": 3, 00:11:05.675 "base_bdevs_list": [ 00:11:05.675 { 00:11:05.675 "name": "BaseBdev1", 00:11:05.675 "uuid": "e187aaaf-7d7f-4b1b-929b-ee5b3710876c", 00:11:05.675 "is_configured": true, 00:11:05.675 "data_offset": 2048, 00:11:05.675 "data_size": 
63488 00:11:05.675 }, 00:11:05.675 { 00:11:05.675 "name": "BaseBdev2", 00:11:05.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.675 "is_configured": false, 00:11:05.675 "data_offset": 0, 00:11:05.675 "data_size": 0 00:11:05.675 }, 00:11:05.675 { 00:11:05.675 "name": "BaseBdev3", 00:11:05.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.675 "is_configured": false, 00:11:05.675 "data_offset": 0, 00:11:05.675 "data_size": 0 00:11:05.675 } 00:11:05.675 ] 00:11:05.675 }' 00:11:05.675 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.675 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.933 [2024-11-27 04:28:02.503824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.933 [2024-11-27 04:28:02.504024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.933 [2024-11-27 04:28:02.511917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.933 [2024-11-27 
04:28:02.514587] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.933 [2024-11-27 04:28:02.514660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.933 [2024-11-27 04:28:02.514675] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.933 [2024-11-27 04:28:02.514687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.933 04:28:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.193 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.193 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.193 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.193 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.193 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.193 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.193 "name": "Existed_Raid", 00:11:06.193 "uuid": "56af7a8b-b4f0-4128-8f11-0e21dfc00f96", 00:11:06.193 "strip_size_kb": 64, 00:11:06.193 "state": "configuring", 00:11:06.193 "raid_level": "concat", 00:11:06.193 "superblock": true, 00:11:06.193 "num_base_bdevs": 3, 00:11:06.193 "num_base_bdevs_discovered": 1, 00:11:06.193 "num_base_bdevs_operational": 3, 00:11:06.193 "base_bdevs_list": [ 00:11:06.193 { 00:11:06.193 "name": "BaseBdev1", 00:11:06.193 "uuid": "e187aaaf-7d7f-4b1b-929b-ee5b3710876c", 00:11:06.193 "is_configured": true, 00:11:06.193 "data_offset": 2048, 00:11:06.193 "data_size": 63488 00:11:06.193 }, 00:11:06.193 { 00:11:06.193 "name": "BaseBdev2", 00:11:06.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.193 "is_configured": false, 00:11:06.193 "data_offset": 0, 00:11:06.193 "data_size": 0 00:11:06.193 }, 00:11:06.193 { 00:11:06.193 "name": "BaseBdev3", 00:11:06.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.193 "is_configured": false, 00:11:06.193 "data_offset": 0, 00:11:06.193 "data_size": 0 00:11:06.193 } 00:11:06.193 ] 00:11:06.193 }' 00:11:06.193 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.193 04:28:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.451 04:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.451 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.451 04:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.451 [2024-11-27 04:28:03.008244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.451 BaseBdev2 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.451 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.451 [ 00:11:06.451 { 00:11:06.452 "name": "BaseBdev2", 00:11:06.452 "aliases": [ 00:11:06.452 "13b6ca74-afd7-46ac-993a-399a370be780" 00:11:06.452 ], 00:11:06.452 "product_name": "Malloc disk", 00:11:06.452 "block_size": 512, 00:11:06.452 "num_blocks": 65536, 00:11:06.452 "uuid": "13b6ca74-afd7-46ac-993a-399a370be780", 00:11:06.452 "assigned_rate_limits": { 00:11:06.452 "rw_ios_per_sec": 0, 00:11:06.452 "rw_mbytes_per_sec": 0, 00:11:06.452 "r_mbytes_per_sec": 0, 00:11:06.452 "w_mbytes_per_sec": 0 00:11:06.452 }, 00:11:06.452 "claimed": true, 00:11:06.452 "claim_type": "exclusive_write", 00:11:06.452 "zoned": false, 00:11:06.452 "supported_io_types": { 00:11:06.452 "read": true, 00:11:06.452 "write": true, 00:11:06.452 "unmap": true, 00:11:06.452 "flush": true, 00:11:06.452 "reset": true, 00:11:06.452 "nvme_admin": false, 00:11:06.452 "nvme_io": false, 00:11:06.452 "nvme_io_md": false, 00:11:06.452 "write_zeroes": true, 00:11:06.452 "zcopy": true, 00:11:06.452 "get_zone_info": false, 00:11:06.452 "zone_management": false, 00:11:06.452 "zone_append": false, 00:11:06.452 "compare": false, 00:11:06.452 "compare_and_write": false, 00:11:06.452 "abort": true, 00:11:06.452 "seek_hole": false, 00:11:06.452 "seek_data": false, 00:11:06.452 "copy": true, 00:11:06.452 "nvme_iov_md": false 00:11:06.452 }, 00:11:06.452 "memory_domains": [ 00:11:06.452 { 00:11:06.452 "dma_device_id": "system", 00:11:06.452 "dma_device_type": 1 00:11:06.452 }, 00:11:06.452 { 00:11:06.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.452 "dma_device_type": 2 00:11:06.452 } 00:11:06.452 ], 00:11:06.452 "driver_specific": {} 00:11:06.452 } 00:11:06.452 ] 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.452 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.711 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.711 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.711 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.711 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.711 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.711 04:28:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.711 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.711 "name": "Existed_Raid", 00:11:06.711 "uuid": "56af7a8b-b4f0-4128-8f11-0e21dfc00f96", 00:11:06.711 "strip_size_kb": 64, 00:11:06.711 "state": "configuring", 00:11:06.711 "raid_level": "concat", 00:11:06.711 "superblock": true, 00:11:06.711 "num_base_bdevs": 3, 00:11:06.711 "num_base_bdevs_discovered": 2, 00:11:06.711 "num_base_bdevs_operational": 3, 00:11:06.711 "base_bdevs_list": [ 00:11:06.711 { 00:11:06.711 "name": "BaseBdev1", 00:11:06.711 "uuid": "e187aaaf-7d7f-4b1b-929b-ee5b3710876c", 00:11:06.711 "is_configured": true, 00:11:06.711 "data_offset": 2048, 00:11:06.711 "data_size": 63488 00:11:06.711 }, 00:11:06.711 { 00:11:06.711 "name": "BaseBdev2", 00:11:06.711 "uuid": "13b6ca74-afd7-46ac-993a-399a370be780", 00:11:06.711 "is_configured": true, 00:11:06.711 "data_offset": 2048, 00:11:06.711 "data_size": 63488 00:11:06.711 }, 00:11:06.711 { 00:11:06.711 "name": "BaseBdev3", 00:11:06.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.711 "is_configured": false, 00:11:06.711 "data_offset": 0, 00:11:06.711 "data_size": 0 00:11:06.711 } 00:11:06.711 ] 00:11:06.711 }' 00:11:06.711 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.711 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.970 [2024-11-27 04:28:03.537513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.970 [2024-11-27 04:28:03.538033] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:06.970 [2024-11-27 04:28:03.538140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:06.970 [2024-11-27 04:28:03.538592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:06.970 BaseBdev3 00:11:06.970 [2024-11-27 04:28:03.538872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:06.970 [2024-11-27 04:28:03.538934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:06.970 [2024-11-27 04:28:03.539255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.970 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.971 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.971 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.971 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.971 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:06.971 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.971 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.971 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.230 [ 00:11:07.230 { 00:11:07.230 "name": "BaseBdev3", 00:11:07.230 "aliases": [ 00:11:07.230 "b2d2b1fd-49f4-4338-a182-a014bfd6a8d2" 00:11:07.230 ], 00:11:07.230 "product_name": "Malloc disk", 00:11:07.230 "block_size": 512, 00:11:07.230 "num_blocks": 65536, 00:11:07.230 "uuid": "b2d2b1fd-49f4-4338-a182-a014bfd6a8d2", 00:11:07.230 "assigned_rate_limits": { 00:11:07.230 "rw_ios_per_sec": 0, 00:11:07.230 "rw_mbytes_per_sec": 0, 00:11:07.230 "r_mbytes_per_sec": 0, 00:11:07.230 "w_mbytes_per_sec": 0 00:11:07.230 }, 00:11:07.230 "claimed": true, 00:11:07.230 "claim_type": "exclusive_write", 00:11:07.230 "zoned": false, 00:11:07.230 "supported_io_types": { 00:11:07.230 "read": true, 00:11:07.230 "write": true, 00:11:07.230 "unmap": true, 00:11:07.230 "flush": true, 00:11:07.230 "reset": true, 00:11:07.230 "nvme_admin": false, 00:11:07.230 "nvme_io": false, 00:11:07.230 "nvme_io_md": false, 00:11:07.230 "write_zeroes": true, 00:11:07.230 "zcopy": true, 00:11:07.230 "get_zone_info": false, 00:11:07.230 "zone_management": false, 00:11:07.230 "zone_append": false, 00:11:07.230 "compare": false, 00:11:07.230 "compare_and_write": false, 00:11:07.230 "abort": true, 00:11:07.230 "seek_hole": false, 00:11:07.230 "seek_data": false, 00:11:07.230 "copy": true, 00:11:07.230 "nvme_iov_md": false 00:11:07.230 }, 00:11:07.230 "memory_domains": [ 00:11:07.230 { 00:11:07.230 "dma_device_id": "system", 00:11:07.230 "dma_device_type": 1 00:11:07.230 }, 00:11:07.230 { 00:11:07.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.230 "dma_device_type": 2 00:11:07.230 } 00:11:07.230 ], 00:11:07.230 "driver_specific": 
{} 00:11:07.230 } 00:11:07.230 ] 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.230 "name": "Existed_Raid", 00:11:07.230 "uuid": "56af7a8b-b4f0-4128-8f11-0e21dfc00f96", 00:11:07.230 "strip_size_kb": 64, 00:11:07.230 "state": "online", 00:11:07.230 "raid_level": "concat", 00:11:07.230 "superblock": true, 00:11:07.230 "num_base_bdevs": 3, 00:11:07.230 "num_base_bdevs_discovered": 3, 00:11:07.230 "num_base_bdevs_operational": 3, 00:11:07.230 "base_bdevs_list": [ 00:11:07.230 { 00:11:07.230 "name": "BaseBdev1", 00:11:07.230 "uuid": "e187aaaf-7d7f-4b1b-929b-ee5b3710876c", 00:11:07.230 "is_configured": true, 00:11:07.230 "data_offset": 2048, 00:11:07.230 "data_size": 63488 00:11:07.230 }, 00:11:07.230 { 00:11:07.230 "name": "BaseBdev2", 00:11:07.230 "uuid": "13b6ca74-afd7-46ac-993a-399a370be780", 00:11:07.230 "is_configured": true, 00:11:07.230 "data_offset": 2048, 00:11:07.230 "data_size": 63488 00:11:07.230 }, 00:11:07.230 { 00:11:07.230 "name": "BaseBdev3", 00:11:07.230 "uuid": "b2d2b1fd-49f4-4338-a182-a014bfd6a8d2", 00:11:07.230 "is_configured": true, 00:11:07.230 "data_offset": 2048, 00:11:07.230 "data_size": 63488 00:11:07.230 } 00:11:07.230 ] 00:11:07.230 }' 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.230 04:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.502 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.502 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.502 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:11:07.502 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.502 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.502 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.502 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.502 04:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.502 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.502 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.502 [2024-11-27 04:28:04.009293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.502 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.502 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.502 "name": "Existed_Raid", 00:11:07.502 "aliases": [ 00:11:07.502 "56af7a8b-b4f0-4128-8f11-0e21dfc00f96" 00:11:07.502 ], 00:11:07.502 "product_name": "Raid Volume", 00:11:07.503 "block_size": 512, 00:11:07.503 "num_blocks": 190464, 00:11:07.503 "uuid": "56af7a8b-b4f0-4128-8f11-0e21dfc00f96", 00:11:07.503 "assigned_rate_limits": { 00:11:07.503 "rw_ios_per_sec": 0, 00:11:07.503 "rw_mbytes_per_sec": 0, 00:11:07.503 "r_mbytes_per_sec": 0, 00:11:07.503 "w_mbytes_per_sec": 0 00:11:07.503 }, 00:11:07.503 "claimed": false, 00:11:07.503 "zoned": false, 00:11:07.503 "supported_io_types": { 00:11:07.503 "read": true, 00:11:07.503 "write": true, 00:11:07.503 "unmap": true, 00:11:07.503 "flush": true, 00:11:07.503 "reset": true, 00:11:07.503 "nvme_admin": false, 00:11:07.503 "nvme_io": false, 00:11:07.503 "nvme_io_md": false, 00:11:07.503 
"write_zeroes": true, 00:11:07.503 "zcopy": false, 00:11:07.503 "get_zone_info": false, 00:11:07.503 "zone_management": false, 00:11:07.503 "zone_append": false, 00:11:07.503 "compare": false, 00:11:07.503 "compare_and_write": false, 00:11:07.503 "abort": false, 00:11:07.503 "seek_hole": false, 00:11:07.503 "seek_data": false, 00:11:07.503 "copy": false, 00:11:07.503 "nvme_iov_md": false 00:11:07.503 }, 00:11:07.503 "memory_domains": [ 00:11:07.503 { 00:11:07.503 "dma_device_id": "system", 00:11:07.503 "dma_device_type": 1 00:11:07.503 }, 00:11:07.503 { 00:11:07.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.503 "dma_device_type": 2 00:11:07.503 }, 00:11:07.503 { 00:11:07.503 "dma_device_id": "system", 00:11:07.503 "dma_device_type": 1 00:11:07.503 }, 00:11:07.503 { 00:11:07.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.503 "dma_device_type": 2 00:11:07.503 }, 00:11:07.503 { 00:11:07.503 "dma_device_id": "system", 00:11:07.503 "dma_device_type": 1 00:11:07.503 }, 00:11:07.503 { 00:11:07.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.503 "dma_device_type": 2 00:11:07.503 } 00:11:07.503 ], 00:11:07.503 "driver_specific": { 00:11:07.503 "raid": { 00:11:07.503 "uuid": "56af7a8b-b4f0-4128-8f11-0e21dfc00f96", 00:11:07.503 "strip_size_kb": 64, 00:11:07.503 "state": "online", 00:11:07.503 "raid_level": "concat", 00:11:07.503 "superblock": true, 00:11:07.503 "num_base_bdevs": 3, 00:11:07.503 "num_base_bdevs_discovered": 3, 00:11:07.503 "num_base_bdevs_operational": 3, 00:11:07.503 "base_bdevs_list": [ 00:11:07.503 { 00:11:07.503 "name": "BaseBdev1", 00:11:07.503 "uuid": "e187aaaf-7d7f-4b1b-929b-ee5b3710876c", 00:11:07.503 "is_configured": true, 00:11:07.503 "data_offset": 2048, 00:11:07.503 "data_size": 63488 00:11:07.503 }, 00:11:07.503 { 00:11:07.503 "name": "BaseBdev2", 00:11:07.503 "uuid": "13b6ca74-afd7-46ac-993a-399a370be780", 00:11:07.503 "is_configured": true, 00:11:07.503 "data_offset": 2048, 00:11:07.503 "data_size": 63488 00:11:07.503 }, 
00:11:07.503 { 00:11:07.503 "name": "BaseBdev3", 00:11:07.503 "uuid": "b2d2b1fd-49f4-4338-a182-a014bfd6a8d2", 00:11:07.503 "is_configured": true, 00:11:07.503 "data_offset": 2048, 00:11:07.503 "data_size": 63488 00:11:07.503 } 00:11:07.503 ] 00:11:07.503 } 00:11:07.503 } 00:11:07.503 }' 00:11:07.503 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.763 BaseBdev2 00:11:07.763 BaseBdev3' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.763 
04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.763 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.763 [2024-11-27 04:28:04.308498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.763 [2024-11-27 04:28:04.308674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.763 [2024-11-27 04:28:04.308793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.023 "name": "Existed_Raid", 00:11:08.023 "uuid": "56af7a8b-b4f0-4128-8f11-0e21dfc00f96", 00:11:08.023 "strip_size_kb": 64, 00:11:08.023 "state": "offline", 00:11:08.023 "raid_level": "concat", 00:11:08.023 "superblock": true, 00:11:08.023 "num_base_bdevs": 3, 00:11:08.023 "num_base_bdevs_discovered": 2, 00:11:08.023 "num_base_bdevs_operational": 2, 00:11:08.023 "base_bdevs_list": [ 00:11:08.023 { 00:11:08.023 "name": null, 00:11:08.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.023 "is_configured": false, 00:11:08.023 "data_offset": 0, 00:11:08.023 "data_size": 63488 00:11:08.023 }, 00:11:08.023 { 00:11:08.023 "name": "BaseBdev2", 00:11:08.023 "uuid": "13b6ca74-afd7-46ac-993a-399a370be780", 00:11:08.023 "is_configured": true, 00:11:08.023 "data_offset": 2048, 00:11:08.023 "data_size": 63488 00:11:08.023 }, 00:11:08.023 { 00:11:08.023 "name": "BaseBdev3", 00:11:08.023 "uuid": "b2d2b1fd-49f4-4338-a182-a014bfd6a8d2", 
00:11:08.023 "is_configured": true, 00:11:08.023 "data_offset": 2048, 00:11:08.023 "data_size": 63488 00:11:08.023 } 00:11:08.023 ] 00:11:08.023 }' 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.023 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.589 04:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.589 [2024-11-27 04:28:04.996909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.589 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.589 04:28:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.589 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.589 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.589 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.589 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.589 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.589 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.848 [2024-11-27 04:28:05.189039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.848 [2024-11-27 04:28:05.189160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.848 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.848 BaseBdev2 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.106 04:28:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.106 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.107 [ 00:11:09.107 { 00:11:09.107 "name": "BaseBdev2", 00:11:09.107 "aliases": [ 00:11:09.107 "77ea5c0c-0a34-4578-8b89-90781e9bea71" 00:11:09.107 ], 00:11:09.107 "product_name": "Malloc disk", 00:11:09.107 "block_size": 512, 00:11:09.107 "num_blocks": 65536, 00:11:09.107 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:09.107 "assigned_rate_limits": { 00:11:09.107 "rw_ios_per_sec": 0, 00:11:09.107 "rw_mbytes_per_sec": 0, 00:11:09.107 "r_mbytes_per_sec": 0, 00:11:09.107 "w_mbytes_per_sec": 0 00:11:09.107 }, 00:11:09.107 "claimed": false, 00:11:09.107 "zoned": false, 00:11:09.107 "supported_io_types": { 00:11:09.107 "read": true, 00:11:09.107 "write": true, 00:11:09.107 "unmap": true, 00:11:09.107 "flush": true, 00:11:09.107 "reset": true, 00:11:09.107 "nvme_admin": false, 00:11:09.107 "nvme_io": false, 00:11:09.107 "nvme_io_md": false, 00:11:09.107 "write_zeroes": true, 00:11:09.107 "zcopy": true, 00:11:09.107 "get_zone_info": false, 00:11:09.107 
"zone_management": false, 00:11:09.107 "zone_append": false, 00:11:09.107 "compare": false, 00:11:09.107 "compare_and_write": false, 00:11:09.107 "abort": true, 00:11:09.107 "seek_hole": false, 00:11:09.107 "seek_data": false, 00:11:09.107 "copy": true, 00:11:09.107 "nvme_iov_md": false 00:11:09.107 }, 00:11:09.107 "memory_domains": [ 00:11:09.107 { 00:11:09.107 "dma_device_id": "system", 00:11:09.107 "dma_device_type": 1 00:11:09.107 }, 00:11:09.107 { 00:11:09.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.107 "dma_device_type": 2 00:11:09.107 } 00:11:09.107 ], 00:11:09.107 "driver_specific": {} 00:11:09.107 } 00:11:09.107 ] 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.107 BaseBdev3 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.107 [ 00:11:09.107 { 00:11:09.107 "name": "BaseBdev3", 00:11:09.107 "aliases": [ 00:11:09.107 "b1997f95-67de-4d8e-a684-3818879b2a48" 00:11:09.107 ], 00:11:09.107 "product_name": "Malloc disk", 00:11:09.107 "block_size": 512, 00:11:09.107 "num_blocks": 65536, 00:11:09.107 "uuid": "b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:09.107 "assigned_rate_limits": { 00:11:09.107 "rw_ios_per_sec": 0, 00:11:09.107 "rw_mbytes_per_sec": 0, 00:11:09.107 "r_mbytes_per_sec": 0, 00:11:09.107 "w_mbytes_per_sec": 0 00:11:09.107 }, 00:11:09.107 "claimed": false, 00:11:09.107 "zoned": false, 00:11:09.107 "supported_io_types": { 00:11:09.107 "read": true, 00:11:09.107 "write": true, 00:11:09.107 "unmap": true, 00:11:09.107 "flush": true, 00:11:09.107 "reset": true, 00:11:09.107 "nvme_admin": false, 00:11:09.107 "nvme_io": false, 00:11:09.107 "nvme_io_md": false, 00:11:09.107 "write_zeroes": true, 00:11:09.107 
"zcopy": true, 00:11:09.107 "get_zone_info": false, 00:11:09.107 "zone_management": false, 00:11:09.107 "zone_append": false, 00:11:09.107 "compare": false, 00:11:09.107 "compare_and_write": false, 00:11:09.107 "abort": true, 00:11:09.107 "seek_hole": false, 00:11:09.107 "seek_data": false, 00:11:09.107 "copy": true, 00:11:09.107 "nvme_iov_md": false 00:11:09.107 }, 00:11:09.107 "memory_domains": [ 00:11:09.107 { 00:11:09.107 "dma_device_id": "system", 00:11:09.107 "dma_device_type": 1 00:11:09.107 }, 00:11:09.107 { 00:11:09.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.107 "dma_device_type": 2 00:11:09.107 } 00:11:09.107 ], 00:11:09.107 "driver_specific": {} 00:11:09.107 } 00:11:09.107 ] 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.107 [2024-11-27 04:28:05.574425] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.107 [2024-11-27 04:28:05.574601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.107 [2024-11-27 04:28:05.574675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.107 [2024-11-27 04:28:05.577493] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.107 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.108 04:28:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.108 "name": "Existed_Raid", 00:11:09.108 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:09.108 "strip_size_kb": 64, 00:11:09.108 "state": "configuring", 00:11:09.108 "raid_level": "concat", 00:11:09.108 "superblock": true, 00:11:09.108 "num_base_bdevs": 3, 00:11:09.108 "num_base_bdevs_discovered": 2, 00:11:09.108 "num_base_bdevs_operational": 3, 00:11:09.108 "base_bdevs_list": [ 00:11:09.108 { 00:11:09.108 "name": "BaseBdev1", 00:11:09.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.108 "is_configured": false, 00:11:09.108 "data_offset": 0, 00:11:09.108 "data_size": 0 00:11:09.108 }, 00:11:09.108 { 00:11:09.108 "name": "BaseBdev2", 00:11:09.108 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:09.108 "is_configured": true, 00:11:09.108 "data_offset": 2048, 00:11:09.108 "data_size": 63488 00:11:09.108 }, 00:11:09.108 { 00:11:09.108 "name": "BaseBdev3", 00:11:09.108 "uuid": "b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:09.108 "is_configured": true, 00:11:09.108 "data_offset": 2048, 00:11:09.108 "data_size": 63488 00:11:09.108 } 00:11:09.108 ] 00:11:09.108 }' 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.108 04:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.674 [2024-11-27 04:28:06.081707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.674 04:28:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.674 "name": "Existed_Raid", 00:11:09.674 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:09.674 "strip_size_kb": 64, 
00:11:09.674 "state": "configuring", 00:11:09.674 "raid_level": "concat", 00:11:09.674 "superblock": true, 00:11:09.674 "num_base_bdevs": 3, 00:11:09.674 "num_base_bdevs_discovered": 1, 00:11:09.674 "num_base_bdevs_operational": 3, 00:11:09.674 "base_bdevs_list": [ 00:11:09.674 { 00:11:09.674 "name": "BaseBdev1", 00:11:09.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.674 "is_configured": false, 00:11:09.674 "data_offset": 0, 00:11:09.674 "data_size": 0 00:11:09.674 }, 00:11:09.674 { 00:11:09.674 "name": null, 00:11:09.674 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:09.674 "is_configured": false, 00:11:09.674 "data_offset": 0, 00:11:09.674 "data_size": 63488 00:11:09.674 }, 00:11:09.674 { 00:11:09.674 "name": "BaseBdev3", 00:11:09.674 "uuid": "b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:09.674 "is_configured": true, 00:11:09.674 "data_offset": 2048, 00:11:09.674 "data_size": 63488 00:11:09.674 } 00:11:09.674 ] 00:11:09.674 }' 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.674 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.241 [2024-11-27 04:28:06.629033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.241 BaseBdev1 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.241 
[ 00:11:10.241 { 00:11:10.241 "name": "BaseBdev1", 00:11:10.241 "aliases": [ 00:11:10.241 "5b2c9471-bcb7-4fd0-b0bf-c1927883b373" 00:11:10.241 ], 00:11:10.241 "product_name": "Malloc disk", 00:11:10.241 "block_size": 512, 00:11:10.241 "num_blocks": 65536, 00:11:10.241 "uuid": "5b2c9471-bcb7-4fd0-b0bf-c1927883b373", 00:11:10.241 "assigned_rate_limits": { 00:11:10.241 "rw_ios_per_sec": 0, 00:11:10.241 "rw_mbytes_per_sec": 0, 00:11:10.241 "r_mbytes_per_sec": 0, 00:11:10.241 "w_mbytes_per_sec": 0 00:11:10.241 }, 00:11:10.241 "claimed": true, 00:11:10.241 "claim_type": "exclusive_write", 00:11:10.241 "zoned": false, 00:11:10.241 "supported_io_types": { 00:11:10.241 "read": true, 00:11:10.241 "write": true, 00:11:10.241 "unmap": true, 00:11:10.241 "flush": true, 00:11:10.241 "reset": true, 00:11:10.241 "nvme_admin": false, 00:11:10.241 "nvme_io": false, 00:11:10.241 "nvme_io_md": false, 00:11:10.241 "write_zeroes": true, 00:11:10.241 "zcopy": true, 00:11:10.241 "get_zone_info": false, 00:11:10.241 "zone_management": false, 00:11:10.241 "zone_append": false, 00:11:10.241 "compare": false, 00:11:10.241 "compare_and_write": false, 00:11:10.241 "abort": true, 00:11:10.241 "seek_hole": false, 00:11:10.241 "seek_data": false, 00:11:10.241 "copy": true, 00:11:10.241 "nvme_iov_md": false 00:11:10.241 }, 00:11:10.241 "memory_domains": [ 00:11:10.241 { 00:11:10.241 "dma_device_id": "system", 00:11:10.241 "dma_device_type": 1 00:11:10.241 }, 00:11:10.241 { 00:11:10.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.241 "dma_device_type": 2 00:11:10.241 } 00:11:10.241 ], 00:11:10.241 "driver_specific": {} 00:11:10.241 } 00:11:10.241 ] 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.241 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.242 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.242 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.242 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.242 "name": "Existed_Raid", 00:11:10.242 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:10.242 "strip_size_kb": 64, 00:11:10.242 "state": "configuring", 00:11:10.242 "raid_level": "concat", 00:11:10.242 "superblock": true, 
00:11:10.242 "num_base_bdevs": 3, 00:11:10.242 "num_base_bdevs_discovered": 2, 00:11:10.242 "num_base_bdevs_operational": 3, 00:11:10.242 "base_bdevs_list": [ 00:11:10.242 { 00:11:10.242 "name": "BaseBdev1", 00:11:10.242 "uuid": "5b2c9471-bcb7-4fd0-b0bf-c1927883b373", 00:11:10.242 "is_configured": true, 00:11:10.242 "data_offset": 2048, 00:11:10.242 "data_size": 63488 00:11:10.242 }, 00:11:10.242 { 00:11:10.242 "name": null, 00:11:10.242 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:10.242 "is_configured": false, 00:11:10.242 "data_offset": 0, 00:11:10.242 "data_size": 63488 00:11:10.242 }, 00:11:10.242 { 00:11:10.242 "name": "BaseBdev3", 00:11:10.242 "uuid": "b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:10.242 "is_configured": true, 00:11:10.242 "data_offset": 2048, 00:11:10.242 "data_size": 63488 00:11:10.242 } 00:11:10.242 ] 00:11:10.242 }' 00:11:10.242 04:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.242 04:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.808 [2024-11-27 04:28:07.208231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.808 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.809 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.809 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.809 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:10.809 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.809 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.809 "name": "Existed_Raid", 00:11:10.809 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:10.809 "strip_size_kb": 64, 00:11:10.809 "state": "configuring", 00:11:10.809 "raid_level": "concat", 00:11:10.809 "superblock": true, 00:11:10.809 "num_base_bdevs": 3, 00:11:10.809 "num_base_bdevs_discovered": 1, 00:11:10.809 "num_base_bdevs_operational": 3, 00:11:10.809 "base_bdevs_list": [ 00:11:10.809 { 00:11:10.809 "name": "BaseBdev1", 00:11:10.809 "uuid": "5b2c9471-bcb7-4fd0-b0bf-c1927883b373", 00:11:10.809 "is_configured": true, 00:11:10.809 "data_offset": 2048, 00:11:10.809 "data_size": 63488 00:11:10.809 }, 00:11:10.809 { 00:11:10.809 "name": null, 00:11:10.809 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:10.809 "is_configured": false, 00:11:10.809 "data_offset": 0, 00:11:10.809 "data_size": 63488 00:11:10.809 }, 00:11:10.809 { 00:11:10.809 "name": null, 00:11:10.809 "uuid": "b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:10.809 "is_configured": false, 00:11:10.809 "data_offset": 0, 00:11:10.809 "data_size": 63488 00:11:10.809 } 00:11:10.809 ] 00:11:10.809 }' 00:11:10.809 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.809 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.375 [2024-11-27 04:28:07.723803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.375 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.376 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.376 04:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.376 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.376 "name": "Existed_Raid", 00:11:11.376 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:11.376 "strip_size_kb": 64, 00:11:11.376 "state": "configuring", 00:11:11.376 "raid_level": "concat", 00:11:11.376 "superblock": true, 00:11:11.376 "num_base_bdevs": 3, 00:11:11.376 "num_base_bdevs_discovered": 2, 00:11:11.376 "num_base_bdevs_operational": 3, 00:11:11.376 "base_bdevs_list": [ 00:11:11.376 { 00:11:11.376 "name": "BaseBdev1", 00:11:11.376 "uuid": "5b2c9471-bcb7-4fd0-b0bf-c1927883b373", 00:11:11.376 "is_configured": true, 00:11:11.376 "data_offset": 2048, 00:11:11.376 "data_size": 63488 00:11:11.376 }, 00:11:11.376 { 00:11:11.376 "name": null, 00:11:11.376 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:11.376 "is_configured": false, 00:11:11.376 "data_offset": 0, 00:11:11.376 "data_size": 63488 00:11:11.376 }, 00:11:11.376 { 00:11:11.376 "name": "BaseBdev3", 00:11:11.376 "uuid": "b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:11.376 "is_configured": true, 00:11:11.376 "data_offset": 2048, 00:11:11.376 "data_size": 63488 00:11:11.376 } 00:11:11.376 ] 00:11:11.376 }' 00:11:11.376 04:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.376 04:28:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:11.635 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.635 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.635 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.635 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.635 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.635 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:11.635 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.635 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.635 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.635 [2024-11-27 04:28:08.211181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.893 "name": "Existed_Raid", 00:11:11.893 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:11.893 "strip_size_kb": 64, 00:11:11.893 "state": "configuring", 00:11:11.893 "raid_level": "concat", 00:11:11.893 "superblock": true, 00:11:11.893 "num_base_bdevs": 3, 00:11:11.893 "num_base_bdevs_discovered": 1, 00:11:11.893 "num_base_bdevs_operational": 3, 00:11:11.893 "base_bdevs_list": [ 00:11:11.893 { 00:11:11.893 "name": null, 00:11:11.893 "uuid": "5b2c9471-bcb7-4fd0-b0bf-c1927883b373", 00:11:11.893 "is_configured": false, 00:11:11.893 "data_offset": 0, 00:11:11.893 "data_size": 63488 00:11:11.893 }, 00:11:11.893 { 00:11:11.893 "name": null, 00:11:11.893 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:11.893 "is_configured": false, 00:11:11.893 "data_offset": 0, 
00:11:11.893 "data_size": 63488 00:11:11.893 }, 00:11:11.893 { 00:11:11.893 "name": "BaseBdev3", 00:11:11.893 "uuid": "b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:11.893 "is_configured": true, 00:11:11.893 "data_offset": 2048, 00:11:11.893 "data_size": 63488 00:11:11.893 } 00:11:11.893 ] 00:11:11.893 }' 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.893 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.460 [2024-11-27 04:28:08.882525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:12.460 04:28:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.460 "name": "Existed_Raid", 00:11:12.460 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:12.460 "strip_size_kb": 64, 00:11:12.460 "state": "configuring", 00:11:12.460 "raid_level": "concat", 00:11:12.460 "superblock": true, 00:11:12.460 "num_base_bdevs": 3, 00:11:12.460 
"num_base_bdevs_discovered": 2, 00:11:12.460 "num_base_bdevs_operational": 3, 00:11:12.460 "base_bdevs_list": [ 00:11:12.460 { 00:11:12.460 "name": null, 00:11:12.460 "uuid": "5b2c9471-bcb7-4fd0-b0bf-c1927883b373", 00:11:12.460 "is_configured": false, 00:11:12.460 "data_offset": 0, 00:11:12.460 "data_size": 63488 00:11:12.460 }, 00:11:12.460 { 00:11:12.460 "name": "BaseBdev2", 00:11:12.460 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:12.460 "is_configured": true, 00:11:12.460 "data_offset": 2048, 00:11:12.460 "data_size": 63488 00:11:12.460 }, 00:11:12.460 { 00:11:12.460 "name": "BaseBdev3", 00:11:12.460 "uuid": "b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:12.460 "is_configured": true, 00:11:12.460 "data_offset": 2048, 00:11:12.460 "data_size": 63488 00:11:12.460 } 00:11:12.460 ] 00:11:12.460 }' 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.460 04:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.718 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:12.718 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.718 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.718 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.975 04:28:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5b2c9471-bcb7-4fd0-b0bf-c1927883b373 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.975 [2024-11-27 04:28:09.429372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:12.975 [2024-11-27 04:28:09.429718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:12.975 [2024-11-27 04:28:09.429739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:12.975 [2024-11-27 04:28:09.430108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:12.975 NewBaseBdev 00:11:12.975 [2024-11-27 04:28:09.430335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:12.975 [2024-11-27 04:28:09.430348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:12.975 [2024-11-27 04:28:09.430544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.975 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.975 [ 00:11:12.975 { 00:11:12.975 "name": "NewBaseBdev", 00:11:12.975 "aliases": [ 00:11:12.975 "5b2c9471-bcb7-4fd0-b0bf-c1927883b373" 00:11:12.975 ], 00:11:12.975 "product_name": "Malloc disk", 00:11:12.975 "block_size": 512, 00:11:12.975 "num_blocks": 65536, 00:11:12.975 "uuid": "5b2c9471-bcb7-4fd0-b0bf-c1927883b373", 00:11:12.975 "assigned_rate_limits": { 00:11:12.975 "rw_ios_per_sec": 0, 00:11:12.975 "rw_mbytes_per_sec": 0, 00:11:12.975 "r_mbytes_per_sec": 0, 00:11:12.975 "w_mbytes_per_sec": 0 00:11:12.975 }, 00:11:12.975 "claimed": true, 00:11:12.975 "claim_type": "exclusive_write", 00:11:12.975 "zoned": false, 00:11:12.975 "supported_io_types": { 00:11:12.975 "read": true, 00:11:12.975 "write": true, 
00:11:12.975 "unmap": true, 00:11:12.975 "flush": true, 00:11:12.976 "reset": true, 00:11:12.976 "nvme_admin": false, 00:11:12.976 "nvme_io": false, 00:11:12.976 "nvme_io_md": false, 00:11:12.976 "write_zeroes": true, 00:11:12.976 "zcopy": true, 00:11:12.976 "get_zone_info": false, 00:11:12.976 "zone_management": false, 00:11:12.976 "zone_append": false, 00:11:12.976 "compare": false, 00:11:12.976 "compare_and_write": false, 00:11:12.976 "abort": true, 00:11:12.976 "seek_hole": false, 00:11:12.976 "seek_data": false, 00:11:12.976 "copy": true, 00:11:12.976 "nvme_iov_md": false 00:11:12.976 }, 00:11:12.976 "memory_domains": [ 00:11:12.976 { 00:11:12.976 "dma_device_id": "system", 00:11:12.976 "dma_device_type": 1 00:11:12.976 }, 00:11:12.976 { 00:11:12.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.976 "dma_device_type": 2 00:11:12.976 } 00:11:12.976 ], 00:11:12.976 "driver_specific": {} 00:11:12.976 } 00:11:12.976 ] 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.976 "name": "Existed_Raid", 00:11:12.976 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:12.976 "strip_size_kb": 64, 00:11:12.976 "state": "online", 00:11:12.976 "raid_level": "concat", 00:11:12.976 "superblock": true, 00:11:12.976 "num_base_bdevs": 3, 00:11:12.976 "num_base_bdevs_discovered": 3, 00:11:12.976 "num_base_bdevs_operational": 3, 00:11:12.976 "base_bdevs_list": [ 00:11:12.976 { 00:11:12.976 "name": "NewBaseBdev", 00:11:12.976 "uuid": "5b2c9471-bcb7-4fd0-b0bf-c1927883b373", 00:11:12.976 "is_configured": true, 00:11:12.976 "data_offset": 2048, 00:11:12.976 "data_size": 63488 00:11:12.976 }, 00:11:12.976 { 00:11:12.976 "name": "BaseBdev2", 00:11:12.976 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:12.976 "is_configured": true, 00:11:12.976 "data_offset": 2048, 00:11:12.976 "data_size": 63488 00:11:12.976 }, 00:11:12.976 { 00:11:12.976 "name": "BaseBdev3", 00:11:12.976 "uuid": 
"b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:12.976 "is_configured": true, 00:11:12.976 "data_offset": 2048, 00:11:12.976 "data_size": 63488 00:11:12.976 } 00:11:12.976 ] 00:11:12.976 }' 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.976 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.540 [2024-11-27 04:28:09.925061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.540 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.540 "name": "Existed_Raid", 00:11:13.540 "aliases": [ 00:11:13.540 "9f868da5-46ae-4e79-b492-737af9086ab8" 
00:11:13.540 ], 00:11:13.540 "product_name": "Raid Volume", 00:11:13.540 "block_size": 512, 00:11:13.540 "num_blocks": 190464, 00:11:13.540 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:13.540 "assigned_rate_limits": { 00:11:13.540 "rw_ios_per_sec": 0, 00:11:13.540 "rw_mbytes_per_sec": 0, 00:11:13.540 "r_mbytes_per_sec": 0, 00:11:13.540 "w_mbytes_per_sec": 0 00:11:13.540 }, 00:11:13.540 "claimed": false, 00:11:13.540 "zoned": false, 00:11:13.540 "supported_io_types": { 00:11:13.540 "read": true, 00:11:13.540 "write": true, 00:11:13.540 "unmap": true, 00:11:13.540 "flush": true, 00:11:13.540 "reset": true, 00:11:13.540 "nvme_admin": false, 00:11:13.540 "nvme_io": false, 00:11:13.540 "nvme_io_md": false, 00:11:13.540 "write_zeroes": true, 00:11:13.540 "zcopy": false, 00:11:13.540 "get_zone_info": false, 00:11:13.540 "zone_management": false, 00:11:13.540 "zone_append": false, 00:11:13.540 "compare": false, 00:11:13.540 "compare_and_write": false, 00:11:13.540 "abort": false, 00:11:13.540 "seek_hole": false, 00:11:13.540 "seek_data": false, 00:11:13.540 "copy": false, 00:11:13.540 "nvme_iov_md": false 00:11:13.540 }, 00:11:13.540 "memory_domains": [ 00:11:13.540 { 00:11:13.540 "dma_device_id": "system", 00:11:13.540 "dma_device_type": 1 00:11:13.540 }, 00:11:13.540 { 00:11:13.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.540 "dma_device_type": 2 00:11:13.540 }, 00:11:13.540 { 00:11:13.540 "dma_device_id": "system", 00:11:13.540 "dma_device_type": 1 00:11:13.540 }, 00:11:13.540 { 00:11:13.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.540 "dma_device_type": 2 00:11:13.540 }, 00:11:13.540 { 00:11:13.540 "dma_device_id": "system", 00:11:13.540 "dma_device_type": 1 00:11:13.540 }, 00:11:13.540 { 00:11:13.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.540 "dma_device_type": 2 00:11:13.540 } 00:11:13.540 ], 00:11:13.540 "driver_specific": { 00:11:13.540 "raid": { 00:11:13.540 "uuid": "9f868da5-46ae-4e79-b492-737af9086ab8", 00:11:13.540 
"strip_size_kb": 64, 00:11:13.540 "state": "online", 00:11:13.540 "raid_level": "concat", 00:11:13.540 "superblock": true, 00:11:13.540 "num_base_bdevs": 3, 00:11:13.540 "num_base_bdevs_discovered": 3, 00:11:13.540 "num_base_bdevs_operational": 3, 00:11:13.540 "base_bdevs_list": [ 00:11:13.540 { 00:11:13.540 "name": "NewBaseBdev", 00:11:13.540 "uuid": "5b2c9471-bcb7-4fd0-b0bf-c1927883b373", 00:11:13.540 "is_configured": true, 00:11:13.540 "data_offset": 2048, 00:11:13.540 "data_size": 63488 00:11:13.540 }, 00:11:13.540 { 00:11:13.541 "name": "BaseBdev2", 00:11:13.541 "uuid": "77ea5c0c-0a34-4578-8b89-90781e9bea71", 00:11:13.541 "is_configured": true, 00:11:13.541 "data_offset": 2048, 00:11:13.541 "data_size": 63488 00:11:13.541 }, 00:11:13.541 { 00:11:13.541 "name": "BaseBdev3", 00:11:13.541 "uuid": "b1997f95-67de-4d8e-a684-3818879b2a48", 00:11:13.541 "is_configured": true, 00:11:13.541 "data_offset": 2048, 00:11:13.541 "data_size": 63488 00:11:13.541 } 00:11:13.541 ] 00:11:13.541 } 00:11:13.541 } 00:11:13.541 }' 00:11:13.541 04:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:13.541 BaseBdev2 00:11:13.541 BaseBdev3' 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.541 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.798 [2024-11-27 04:28:10.216364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.798 [2024-11-27 04:28:10.216509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.798 [2024-11-27 04:28:10.216683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.798 [2024-11-27 04:28:10.216796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.798 [2024-11-27 04:28:10.216855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66448 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66448 ']' 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 66448 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66448 00:11:13.798 killing process with pid 66448 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66448' 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66448 00:11:13.798 04:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66448 00:11:13.798 [2024-11-27 04:28:10.266905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.364 [2024-11-27 04:28:10.677147] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.733 04:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:15.733 00:11:15.733 real 0m11.767s 00:11:15.733 user 0m18.267s 00:11:15.733 sys 0m2.014s 00:11:15.733 ************************************ 00:11:15.733 END TEST raid_state_function_test_sb 00:11:15.733 ************************************ 00:11:15.733 04:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.733 04:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.733 04:28:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:15.733 04:28:12 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:15.733 04:28:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.733 04:28:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.733 ************************************ 00:11:15.733 START TEST raid_superblock_test 00:11:15.733 ************************************ 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:15.733 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:15.734 04:28:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67079 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67079 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67079 ']' 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.734 04:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.991 [2024-11-27 04:28:12.346129] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:15.991 [2024-11-27 04:28:12.346260] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67079 ] 00:11:15.991 [2024-11-27 04:28:12.505683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.250 [2024-11-27 04:28:12.672914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.508 [2024-11-27 04:28:12.951153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.508 [2024-11-27 04:28:12.951250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:16.832 
04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.832 malloc1 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.832 [2024-11-27 04:28:13.324979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:16.832 [2024-11-27 04:28:13.325178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.832 [2024-11-27 04:28:13.325232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:16.832 [2024-11-27 04:28:13.325272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.832 [2024-11-27 04:28:13.328253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.832 [2024-11-27 04:28:13.328358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:16.832 pt1 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.832 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.832 malloc2 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.117 [2024-11-27 04:28:13.399296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.117 [2024-11-27 04:28:13.399490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.117 [2024-11-27 04:28:13.399576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:17.117 [2024-11-27 04:28:13.399632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.117 [2024-11-27 04:28:13.402637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.117 [2024-11-27 04:28:13.402745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.117 
pt2 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.117 malloc3 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.117 [2024-11-27 04:28:13.480311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:17.117 [2024-11-27 04:28:13.480485] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.117 [2024-11-27 04:28:13.480542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:17.117 [2024-11-27 04:28:13.480590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.117 [2024-11-27 04:28:13.483693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.117 [2024-11-27 04:28:13.483809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:17.117 pt3 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.117 [2024-11-27 04:28:13.492427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.117 [2024-11-27 04:28:13.494996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.117 [2024-11-27 04:28:13.495164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:17.117 [2024-11-27 04:28:13.495397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:17.117 [2024-11-27 04:28:13.495415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:17.117 [2024-11-27 04:28:13.495810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:17.117 [2024-11-27 04:28:13.496035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:17.117 [2024-11-27 04:28:13.496046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:17.117 [2024-11-27 04:28:13.496352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.117 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.117 "name": "raid_bdev1", 00:11:17.117 "uuid": "e1ae3769-be2f-494a-b872-d03f7395d81c", 00:11:17.117 "strip_size_kb": 64, 00:11:17.117 "state": "online", 00:11:17.117 "raid_level": "concat", 00:11:17.117 "superblock": true, 00:11:17.117 "num_base_bdevs": 3, 00:11:17.117 "num_base_bdevs_discovered": 3, 00:11:17.117 "num_base_bdevs_operational": 3, 00:11:17.117 "base_bdevs_list": [ 00:11:17.117 { 00:11:17.117 "name": "pt1", 00:11:17.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.117 "is_configured": true, 00:11:17.117 "data_offset": 2048, 00:11:17.117 "data_size": 63488 00:11:17.117 }, 00:11:17.117 { 00:11:17.118 "name": "pt2", 00:11:17.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.118 "is_configured": true, 00:11:17.118 "data_offset": 2048, 00:11:17.118 "data_size": 63488 00:11:17.118 }, 00:11:17.118 { 00:11:17.118 "name": "pt3", 00:11:17.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.118 "is_configured": true, 00:11:17.118 "data_offset": 2048, 00:11:17.118 "data_size": 63488 00:11:17.118 } 00:11:17.118 ] 00:11:17.118 }' 00:11:17.118 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.118 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.688 [2024-11-27 04:28:13.980134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.688 04:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.688 "name": "raid_bdev1", 00:11:17.688 "aliases": [ 00:11:17.688 "e1ae3769-be2f-494a-b872-d03f7395d81c" 00:11:17.688 ], 00:11:17.688 "product_name": "Raid Volume", 00:11:17.688 "block_size": 512, 00:11:17.688 "num_blocks": 190464, 00:11:17.688 "uuid": "e1ae3769-be2f-494a-b872-d03f7395d81c", 00:11:17.688 "assigned_rate_limits": { 00:11:17.688 "rw_ios_per_sec": 0, 00:11:17.688 "rw_mbytes_per_sec": 0, 00:11:17.688 "r_mbytes_per_sec": 0, 00:11:17.688 "w_mbytes_per_sec": 0 00:11:17.688 }, 00:11:17.688 "claimed": false, 00:11:17.688 "zoned": false, 00:11:17.688 "supported_io_types": { 00:11:17.688 "read": true, 00:11:17.688 "write": true, 00:11:17.688 "unmap": true, 00:11:17.688 "flush": true, 00:11:17.688 "reset": true, 00:11:17.688 "nvme_admin": false, 00:11:17.688 "nvme_io": false, 00:11:17.688 "nvme_io_md": false, 00:11:17.688 "write_zeroes": true, 00:11:17.688 "zcopy": false, 00:11:17.688 "get_zone_info": false, 00:11:17.688 "zone_management": false, 00:11:17.688 "zone_append": false, 00:11:17.688 "compare": 
false, 00:11:17.688 "compare_and_write": false, 00:11:17.688 "abort": false, 00:11:17.688 "seek_hole": false, 00:11:17.688 "seek_data": false, 00:11:17.688 "copy": false, 00:11:17.688 "nvme_iov_md": false 00:11:17.688 }, 00:11:17.688 "memory_domains": [ 00:11:17.688 { 00:11:17.688 "dma_device_id": "system", 00:11:17.688 "dma_device_type": 1 00:11:17.688 }, 00:11:17.688 { 00:11:17.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.688 "dma_device_type": 2 00:11:17.688 }, 00:11:17.688 { 00:11:17.688 "dma_device_id": "system", 00:11:17.688 "dma_device_type": 1 00:11:17.688 }, 00:11:17.688 { 00:11:17.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.688 "dma_device_type": 2 00:11:17.688 }, 00:11:17.688 { 00:11:17.688 "dma_device_id": "system", 00:11:17.688 "dma_device_type": 1 00:11:17.688 }, 00:11:17.688 { 00:11:17.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.688 "dma_device_type": 2 00:11:17.688 } 00:11:17.688 ], 00:11:17.688 "driver_specific": { 00:11:17.688 "raid": { 00:11:17.688 "uuid": "e1ae3769-be2f-494a-b872-d03f7395d81c", 00:11:17.688 "strip_size_kb": 64, 00:11:17.688 "state": "online", 00:11:17.688 "raid_level": "concat", 00:11:17.688 "superblock": true, 00:11:17.688 "num_base_bdevs": 3, 00:11:17.688 "num_base_bdevs_discovered": 3, 00:11:17.688 "num_base_bdevs_operational": 3, 00:11:17.688 "base_bdevs_list": [ 00:11:17.688 { 00:11:17.688 "name": "pt1", 00:11:17.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.688 "is_configured": true, 00:11:17.688 "data_offset": 2048, 00:11:17.688 "data_size": 63488 00:11:17.688 }, 00:11:17.688 { 00:11:17.688 "name": "pt2", 00:11:17.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.688 "is_configured": true, 00:11:17.688 "data_offset": 2048, 00:11:17.688 "data_size": 63488 00:11:17.688 }, 00:11:17.688 { 00:11:17.688 "name": "pt3", 00:11:17.688 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.688 "is_configured": true, 00:11:17.688 "data_offset": 2048, 00:11:17.688 
"data_size": 63488 00:11:17.688 } 00:11:17.688 ] 00:11:17.688 } 00:11:17.688 } 00:11:17.688 }' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:17.688 pt2 00:11:17.688 pt3' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.688 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:17.688 [2024-11-27 04:28:14.264065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e1ae3769-be2f-494a-b872-d03f7395d81c 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e1ae3769-be2f-494a-b872-d03f7395d81c ']' 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.948 [2024-11-27 04:28:14.323717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.948 [2024-11-27 04:28:14.323892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.948 [2024-11-27 04:28:14.324036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.948 [2024-11-27 04:28:14.324149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.948 [2024-11-27 04:28:14.324163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.948 [2024-11-27 04:28:14.475857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:17.948 [2024-11-27 04:28:14.478544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:17.948 
[2024-11-27 04:28:14.478739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:17.948 [2024-11-27 04:28:14.478825] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:17.948 [2024-11-27 04:28:14.478902] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:17.948 [2024-11-27 04:28:14.478928] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:17.948 [2024-11-27 04:28:14.478950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.948 [2024-11-27 04:28:14.478962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:17.948 request: 00:11:17.948 { 00:11:17.948 "name": "raid_bdev1", 00:11:17.948 "raid_level": "concat", 00:11:17.948 "base_bdevs": [ 00:11:17.948 "malloc1", 00:11:17.948 "malloc2", 00:11:17.948 "malloc3" 00:11:17.948 ], 00:11:17.948 "strip_size_kb": 64, 00:11:17.948 "superblock": false, 00:11:17.948 "method": "bdev_raid_create", 00:11:17.948 "req_id": 1 00:11:17.948 } 00:11:17.948 Got JSON-RPC error response 00:11:17.948 response: 00:11:17.948 { 00:11:17.948 "code": -17, 00:11:17.948 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:17.948 } 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:17.948 04:28:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:17.948 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.207 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:18.207 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:18.207 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:18.207 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.207 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.207 [2024-11-27 04:28:14.547833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.207 [2024-11-27 04:28:14.547941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.207 [2024-11-27 04:28:14.547969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:18.207 [2024-11-27 04:28:14.547982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.207 [2024-11-27 04:28:14.551098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.207 [2024-11-27 04:28:14.551157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.207 [2024-11-27 04:28:14.551293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:18.208 [2024-11-27 04:28:14.551377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:11:18.208 pt1 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.208 "name": "raid_bdev1", 00:11:18.208 "uuid": 
"e1ae3769-be2f-494a-b872-d03f7395d81c", 00:11:18.208 "strip_size_kb": 64, 00:11:18.208 "state": "configuring", 00:11:18.208 "raid_level": "concat", 00:11:18.208 "superblock": true, 00:11:18.208 "num_base_bdevs": 3, 00:11:18.208 "num_base_bdevs_discovered": 1, 00:11:18.208 "num_base_bdevs_operational": 3, 00:11:18.208 "base_bdevs_list": [ 00:11:18.208 { 00:11:18.208 "name": "pt1", 00:11:18.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.208 "is_configured": true, 00:11:18.208 "data_offset": 2048, 00:11:18.208 "data_size": 63488 00:11:18.208 }, 00:11:18.208 { 00:11:18.208 "name": null, 00:11:18.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.208 "is_configured": false, 00:11:18.208 "data_offset": 2048, 00:11:18.208 "data_size": 63488 00:11:18.208 }, 00:11:18.208 { 00:11:18.208 "name": null, 00:11:18.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.208 "is_configured": false, 00:11:18.208 "data_offset": 2048, 00:11:18.208 "data_size": 63488 00:11:18.208 } 00:11:18.208 ] 00:11:18.208 }' 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.208 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.468 [2024-11-27 04:28:14.927781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.468 [2024-11-27 04:28:14.928008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.468 [2024-11-27 04:28:14.928076] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:18.468 [2024-11-27 04:28:14.928144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.468 [2024-11-27 04:28:14.928840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.468 [2024-11-27 04:28:14.928917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.468 [2024-11-27 04:28:14.929106] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.468 [2024-11-27 04:28:14.929187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.468 pt2 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.468 [2024-11-27 04:28:14.935837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.468 "name": "raid_bdev1", 00:11:18.468 "uuid": "e1ae3769-be2f-494a-b872-d03f7395d81c", 00:11:18.468 "strip_size_kb": 64, 00:11:18.468 "state": "configuring", 00:11:18.468 "raid_level": "concat", 00:11:18.468 "superblock": true, 00:11:18.468 "num_base_bdevs": 3, 00:11:18.468 "num_base_bdevs_discovered": 1, 00:11:18.468 "num_base_bdevs_operational": 3, 00:11:18.468 "base_bdevs_list": [ 00:11:18.468 { 00:11:18.468 "name": "pt1", 00:11:18.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.468 "is_configured": true, 00:11:18.468 "data_offset": 2048, 00:11:18.468 "data_size": 63488 00:11:18.468 }, 00:11:18.468 { 00:11:18.468 "name": null, 00:11:18.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.468 "is_configured": false, 00:11:18.468 "data_offset": 0, 00:11:18.468 "data_size": 63488 00:11:18.468 }, 00:11:18.468 { 00:11:18.468 "name": null, 00:11:18.468 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:18.468 "is_configured": false, 00:11:18.468 "data_offset": 2048, 00:11:18.468 "data_size": 63488 00:11:18.468 } 00:11:18.468 ] 00:11:18.468 }' 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.468 04:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.039 [2024-11-27 04:28:15.403726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:19.039 [2024-11-27 04:28:15.403851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.039 [2024-11-27 04:28:15.403876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:19.039 [2024-11-27 04:28:15.403890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.039 [2024-11-27 04:28:15.404568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.039 [2024-11-27 04:28:15.404596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:19.039 [2024-11-27 04:28:15.404712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:19.039 [2024-11-27 04:28:15.404745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:19.039 pt2 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.039 [2024-11-27 04:28:15.415715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:19.039 [2024-11-27 04:28:15.415895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.039 [2024-11-27 04:28:15.415941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:19.039 [2024-11-27 04:28:15.415981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.039 [2024-11-27 04:28:15.416650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.039 [2024-11-27 04:28:15.416738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:19.039 [2024-11-27 04:28:15.416882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:19.039 [2024-11-27 04:28:15.416949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:19.039 [2024-11-27 04:28:15.417163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:19.039 [2024-11-27 04:28:15.417212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:19.039 [2024-11-27 04:28:15.417581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:19.039 [2024-11-27 
04:28:15.417812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:19.039 [2024-11-27 04:28:15.417857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:19.039 [2024-11-27 04:28:15.418123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.039 pt3 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.039 "name": "raid_bdev1", 00:11:19.039 "uuid": "e1ae3769-be2f-494a-b872-d03f7395d81c", 00:11:19.039 "strip_size_kb": 64, 00:11:19.039 "state": "online", 00:11:19.039 "raid_level": "concat", 00:11:19.039 "superblock": true, 00:11:19.039 "num_base_bdevs": 3, 00:11:19.039 "num_base_bdevs_discovered": 3, 00:11:19.039 "num_base_bdevs_operational": 3, 00:11:19.039 "base_bdevs_list": [ 00:11:19.039 { 00:11:19.039 "name": "pt1", 00:11:19.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.039 "is_configured": true, 00:11:19.039 "data_offset": 2048, 00:11:19.039 "data_size": 63488 00:11:19.039 }, 00:11:19.039 { 00:11:19.039 "name": "pt2", 00:11:19.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.039 "is_configured": true, 00:11:19.039 "data_offset": 2048, 00:11:19.039 "data_size": 63488 00:11:19.039 }, 00:11:19.039 { 00:11:19.039 "name": "pt3", 00:11:19.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.039 "is_configured": true, 00:11:19.039 "data_offset": 2048, 00:11:19.039 "data_size": 63488 00:11:19.039 } 00:11:19.039 ] 00:11:19.039 }' 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.039 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.608 [2024-11-27 04:28:15.916179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.608 "name": "raid_bdev1", 00:11:19.608 "aliases": [ 00:11:19.608 "e1ae3769-be2f-494a-b872-d03f7395d81c" 00:11:19.608 ], 00:11:19.608 "product_name": "Raid Volume", 00:11:19.608 "block_size": 512, 00:11:19.608 "num_blocks": 190464, 00:11:19.608 "uuid": "e1ae3769-be2f-494a-b872-d03f7395d81c", 00:11:19.608 "assigned_rate_limits": { 00:11:19.608 "rw_ios_per_sec": 0, 00:11:19.608 "rw_mbytes_per_sec": 0, 00:11:19.608 "r_mbytes_per_sec": 0, 00:11:19.608 "w_mbytes_per_sec": 0 00:11:19.608 }, 00:11:19.608 "claimed": false, 00:11:19.608 "zoned": false, 00:11:19.608 "supported_io_types": { 00:11:19.608 "read": true, 00:11:19.608 "write": true, 00:11:19.608 "unmap": true, 00:11:19.608 "flush": true, 00:11:19.608 "reset": true, 00:11:19.608 "nvme_admin": false, 00:11:19.608 "nvme_io": false, 00:11:19.608 "nvme_io_md": false, 
00:11:19.608 "write_zeroes": true, 00:11:19.608 "zcopy": false, 00:11:19.608 "get_zone_info": false, 00:11:19.608 "zone_management": false, 00:11:19.608 "zone_append": false, 00:11:19.608 "compare": false, 00:11:19.608 "compare_and_write": false, 00:11:19.608 "abort": false, 00:11:19.608 "seek_hole": false, 00:11:19.608 "seek_data": false, 00:11:19.608 "copy": false, 00:11:19.608 "nvme_iov_md": false 00:11:19.608 }, 00:11:19.608 "memory_domains": [ 00:11:19.608 { 00:11:19.608 "dma_device_id": "system", 00:11:19.608 "dma_device_type": 1 00:11:19.608 }, 00:11:19.608 { 00:11:19.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.608 "dma_device_type": 2 00:11:19.608 }, 00:11:19.608 { 00:11:19.608 "dma_device_id": "system", 00:11:19.608 "dma_device_type": 1 00:11:19.608 }, 00:11:19.608 { 00:11:19.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.608 "dma_device_type": 2 00:11:19.608 }, 00:11:19.608 { 00:11:19.608 "dma_device_id": "system", 00:11:19.608 "dma_device_type": 1 00:11:19.608 }, 00:11:19.608 { 00:11:19.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.608 "dma_device_type": 2 00:11:19.608 } 00:11:19.608 ], 00:11:19.608 "driver_specific": { 00:11:19.608 "raid": { 00:11:19.608 "uuid": "e1ae3769-be2f-494a-b872-d03f7395d81c", 00:11:19.608 "strip_size_kb": 64, 00:11:19.608 "state": "online", 00:11:19.608 "raid_level": "concat", 00:11:19.608 "superblock": true, 00:11:19.608 "num_base_bdevs": 3, 00:11:19.608 "num_base_bdevs_discovered": 3, 00:11:19.608 "num_base_bdevs_operational": 3, 00:11:19.608 "base_bdevs_list": [ 00:11:19.608 { 00:11:19.608 "name": "pt1", 00:11:19.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.608 "is_configured": true, 00:11:19.608 "data_offset": 2048, 00:11:19.608 "data_size": 63488 00:11:19.608 }, 00:11:19.608 { 00:11:19.608 "name": "pt2", 00:11:19.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.608 "is_configured": true, 00:11:19.608 "data_offset": 2048, 00:11:19.608 "data_size": 63488 00:11:19.608 }, 
00:11:19.608 { 00:11:19.608 "name": "pt3", 00:11:19.608 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.608 "is_configured": true, 00:11:19.608 "data_offset": 2048, 00:11:19.608 "data_size": 63488 00:11:19.608 } 00:11:19.608 ] 00:11:19.608 } 00:11:19.608 } 00:11:19.608 }' 00:11:19.608 04:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:19.608 pt2 00:11:19.608 pt3' 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:19.608 04:28:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.608 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.866 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.867 
[2024-11-27 04:28:16.228119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e1ae3769-be2f-494a-b872-d03f7395d81c '!=' e1ae3769-be2f-494a-b872-d03f7395d81c ']' 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67079 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67079 ']' 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67079 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67079 00:11:19.867 killing process with pid 67079 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67079' 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67079 00:11:19.867 [2024-11-27 04:28:16.284451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.867 04:28:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 67079 00:11:19.867 [2024-11-27 04:28:16.284606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.867 [2024-11-27 04:28:16.284692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.867 [2024-11-27 04:28:16.284708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:20.126 [2024-11-27 04:28:16.691054] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.028 ************************************ 00:11:22.028 END TEST raid_superblock_test 00:11:22.028 ************************************ 00:11:22.028 04:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:22.028 00:11:22.028 real 0m5.935s 00:11:22.028 user 0m8.237s 00:11:22.028 sys 0m1.001s 00:11:22.028 04:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.028 04:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.028 04:28:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:22.028 04:28:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:22.028 04:28:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.028 04:28:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.028 ************************************ 00:11:22.028 START TEST raid_read_error_test 00:11:22.028 ************************************ 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:22.028 04:28:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nKIv3QB7H0 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67343 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67343 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67343 ']' 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.028 04:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.028 [2024-11-27 04:28:18.354303] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:22.028 [2024-11-27 04:28:18.354444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67343 ] 00:11:22.028 [2024-11-27 04:28:18.533385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.286 [2024-11-27 04:28:18.701129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.544 [2024-11-27 04:28:18.992606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.544 [2024-11-27 04:28:18.992792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.802 BaseBdev1_malloc 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.802 true 00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:22.802 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.803 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.803 [2024-11-27 04:28:19.374057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:22.803 [2024-11-27 04:28:19.374230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:22.803 [2024-11-27 04:28:19.374281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:22.803 [2024-11-27 04:28:19.374304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:22.803 [2024-11-27 04:28:19.378312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:22.803 [2024-11-27 04:28:19.378425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:22.803 BaseBdev1
00:11:22.803 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.803 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:22.803 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:22.803 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.803 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.062 BaseBdev2_malloc
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.062 true
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.062 [2024-11-27 04:28:19.460269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:23.062 [2024-11-27 04:28:19.460416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.062 [2024-11-27 04:28:19.460462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:23.062 [2024-11-27 04:28:19.460485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.062 [2024-11-27 04:28:19.464765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.062 [2024-11-27 04:28:19.464886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:23.062 BaseBdev2
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.062 BaseBdev3_malloc
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.062 true
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.062 [2024-11-27 04:28:19.555055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:23.062 [2024-11-27 04:28:19.555324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.062 [2024-11-27 04:28:19.555363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:23.062 [2024-11-27 04:28:19.555379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.062 [2024-11-27 04:28:19.558749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.062 [2024-11-27 04:28:19.558935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:23.062 BaseBdev3
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.062 [2024-11-27 04:28:19.567370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:23.062 [2024-11-27 04:28:19.570211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:23.062 [2024-11-27 04:28:19.570339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:23.062 [2024-11-27 04:28:19.570636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:11:23.062 [2024-11-27 04:28:19.570652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:11:23.062 [2024-11-27 04:28:19.571064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:11:23.062 [2024-11-27 04:28:19.571332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:11:23.062 [2024-11-27 04:28:19.571359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:11:23.062 [2024-11-27 04:28:19.571759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.062 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:23.062 "name": "raid_bdev1",
00:11:23.062 "uuid": "4882665e-72a3-4af3-bc75-bec1b154ef2c",
00:11:23.062 "strip_size_kb": 64,
00:11:23.062 "state": "online",
00:11:23.062 "raid_level": "concat",
00:11:23.062 "superblock": true,
00:11:23.062 "num_base_bdevs": 3,
00:11:23.062 "num_base_bdevs_discovered": 3,
00:11:23.062 "num_base_bdevs_operational": 3,
00:11:23.062 "base_bdevs_list": [
00:11:23.062 {
00:11:23.062 "name": "BaseBdev1",
00:11:23.063 "uuid": "2f271c43-3760-5a6b-bcf8-9efd9ec25c4b",
00:11:23.063 "is_configured": true,
00:11:23.063 "data_offset": 2048,
00:11:23.063 "data_size": 63488
00:11:23.063 },
00:11:23.063 {
00:11:23.063 "name": "BaseBdev2",
00:11:23.063 "uuid": "1c3c486e-76ed-5bdf-a966-848f004217da",
00:11:23.063 "is_configured": true,
00:11:23.063 "data_offset": 2048,
00:11:23.063 "data_size": 63488
00:11:23.063 },
00:11:23.063 {
00:11:23.063 "name": "BaseBdev3",
00:11:23.063 "uuid": "81de47fd-b108-5289-8980-056cf844fdaf",
00:11:23.063 "is_configured": true,
00:11:23.063 "data_offset": 2048,
00:11:23.063 "data_size": 63488
00:11:23.063 }
00:11:23.063 ]
00:11:23.063 }'
00:11:23.063 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:23.063 04:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.629 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:23.629 04:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:23.629 [2024-11-27 04:28:20.092733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:11:24.643 04:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:11:24.643 04:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.643 04:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:24.643 "name": "raid_bdev1",
00:11:24.643 "uuid": "4882665e-72a3-4af3-bc75-bec1b154ef2c",
00:11:24.643 "strip_size_kb": 64,
00:11:24.643 "state": "online",
00:11:24.643 "raid_level": "concat",
00:11:24.643 "superblock": true,
00:11:24.643 "num_base_bdevs": 3,
00:11:24.643 "num_base_bdevs_discovered": 3,
00:11:24.643 "num_base_bdevs_operational": 3,
00:11:24.643 "base_bdevs_list": [
00:11:24.643 {
00:11:24.643 "name": "BaseBdev1",
00:11:24.643 "uuid": "2f271c43-3760-5a6b-bcf8-9efd9ec25c4b",
00:11:24.643 "is_configured": true,
00:11:24.643 "data_offset": 2048,
00:11:24.643 "data_size": 63488
00:11:24.643 },
00:11:24.643 {
00:11:24.643 "name": "BaseBdev2",
00:11:24.643 "uuid": "1c3c486e-76ed-5bdf-a966-848f004217da",
00:11:24.643 "is_configured": true,
00:11:24.643 "data_offset": 2048,
00:11:24.643 "data_size": 63488
00:11:24.643 },
00:11:24.643 {
00:11:24.643 "name": "BaseBdev3",
00:11:24.643 "uuid": "81de47fd-b108-5289-8980-056cf844fdaf",
00:11:24.643 "is_configured": true,
00:11:24.643 "data_offset": 2048,
00:11:24.643 "data_size": 63488
00:11:24.643 }
00:11:24.643 ]
00:11:24.643 }'
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:24.643 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.210 [2024-11-27 04:28:21.548901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:25.210 [2024-11-27 04:28:21.548964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:25.210 [2024-11-27 04:28:21.552693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:25.210 [2024-11-27 04:28:21.552933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:25.210 [2024-11-27 04:28:21.553030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:25.210 [2024-11-27 04:28:21.553106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:11:25.210 {
00:11:25.210 "results": [
00:11:25.210 {
00:11:25.210 "job": "raid_bdev1",
00:11:25.210 "core_mask": "0x1",
00:11:25.210 "workload": "randrw",
00:11:25.210 "percentage": 50,
00:11:25.210 "status": "finished",
00:11:25.210 "queue_depth": 1,
00:11:25.210 "io_size": 131072,
00:11:25.210 "runtime": 1.456267,
00:11:25.210 "iops": 10583.224092834624,
00:11:25.210 "mibps": 1322.903011604328,
00:11:25.210 "io_failed": 1,
00:11:25.210 "io_timeout": 0,
00:11:25.210 "avg_latency_us": 132.75456226057682,
00:11:25.210 "min_latency_us": 33.98427947598253,
00:11:25.210 "max_latency_us": 1802.955458515284
00:11:25.210 }
00:11:25.210 ],
00:11:25.210 "core_count": 1
00:11:25.210 }
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67343
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67343 ']'
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67343
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67343
killing process with pid 67343
04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67343'
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67343
00:11:25.210 [2024-11-27 04:28:21.599515] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:25.210 04:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67343
00:11:25.469 [2024-11-27 04:28:21.913815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nKIv3QB7H0
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]]
00:11:27.372
00:11:27.372 real 0m5.259s
00:11:27.372 user 0m6.145s
00:11:27.372 sys 0m0.672s
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.372 ************************************
00:11:27.372 END TEST raid_read_error_test
00:11:27.372 ************************************
00:11:27.372 04:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.372 04:28:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write
00:11:27.372 04:28:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:27.372 04:28:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:27.372 04:28:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:27.372 ************************************
00:11:27.372 START TEST raid_write_error_test
00:11:27.372 ************************************
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Iu4yh7a99p
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67493
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67493
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67493 ']'
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:27.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:27.372 04:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.372 [2024-11-27 04:28:23.656748] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:11:27.372 [2024-11-27 04:28:23.656971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67493 ]
00:11:27.372 [2024-11-27 04:28:23.839113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:27.632 [2024-11-27 04:28:24.010441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:27.891 [2024-11-27 04:28:24.308168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:27.891 [2024-11-27 04:28:24.308421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.149 BaseBdev1_malloc
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.149 true
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.149 [2024-11-27 04:28:24.688988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:28.149 [2024-11-27 04:28:24.689202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:28.149 [2024-11-27 04:28:24.689268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:28.149 [2024-11-27 04:28:24.689309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:28.149 [2024-11-27 04:28:24.692557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:28.149 [2024-11-27 04:28:24.692718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:28.149 BaseBdev1
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.149 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.409 BaseBdev2_malloc
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.409 true
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.409 [2024-11-27 04:28:24.768199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:28.409 [2024-11-27 04:28:24.768423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:28.409 [2024-11-27 04:28:24.768458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:28.409 [2024-11-27 04:28:24.768473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:28.409 [2024-11-27 04:28:24.771691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:28.409 [2024-11-27 04:28:24.771773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:28.409 BaseBdev2
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.409 BaseBdev3_malloc
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.409 true
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.409 [2024-11-27 04:28:24.854814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:28.409 [2024-11-27 04:28:24.855011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:28.409 [2024-11-27 04:28:24.855072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:28.409 [2024-11-27 04:28:24.855127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:28.409 [2024-11-27 04:28:24.858378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:28.409 [2024-11-27 04:28:24.858539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:28.409 BaseBdev3
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.409 [2024-11-27 04:28:24.866983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:28.409 [2024-11-27 04:28:24.869862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:28.409 [2024-11-27 04:28:24.869993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:28.409 [2024-11-27 04:28:24.870323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:11:28.409 [2024-11-27 04:28:24.870341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:11:28.409 [2024-11-27 04:28:24.870760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:11:28.409 [2024-11-27 04:28:24.871010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:11:28.409 [2024-11-27 04:28:24.871028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:11:28.409 [2024-11-27 04:28:24.871400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.409 "name": "raid_bdev1",
00:11:28.409 "uuid": "4fb80d93-5374-424d-8cf2-0d5299501d0d",
00:11:28.409 "strip_size_kb": 64,
00:11:28.409 "state": "online",
00:11:28.409 "raid_level": "concat",
00:11:28.409 "superblock": true,
00:11:28.409 "num_base_bdevs": 3,
00:11:28.409 "num_base_bdevs_discovered": 3,
00:11:28.409 "num_base_bdevs_operational": 3,
00:11:28.409 "base_bdevs_list": [
00:11:28.409 {
00:11:28.409 "name": "BaseBdev1",
00:11:28.409 "uuid": "b3674691-903f-523d-b7d7-a41d154cf4f8",
00:11:28.409 "is_configured": true,
00:11:28.409 "data_offset": 2048,
00:11:28.409 "data_size": 63488
00:11:28.409 },
00:11:28.409 {
00:11:28.409 "name": "BaseBdev2",
00:11:28.409 "uuid": "9b7bd91f-2960-5758-99cf-c986eb3d796c",
00:11:28.409 "is_configured": true,
00:11:28.409 "data_offset": 2048,
00:11:28.409 "data_size": 63488
00:11:28.409 },
00:11:28.409 {
00:11:28.409 "name": "BaseBdev3",
00:11:28.409 "uuid": "ff4127ee-96e2-5df5-aa83-0ac5db39ccd7",
00:11:28.409 "is_configured": true,
00:11:28.409 "data_offset": 2048,
00:11:28.409 "data_size": 63488
00:11:28.409 }
00:11:28.409 ]
00:11:28.409 }'
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.409 04:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.976 04:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:28.976 04:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:28.976 [2024-11-27 04:28:25.428194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.911 "name": "raid_bdev1",
00:11:29.911 "uuid": "4fb80d93-5374-424d-8cf2-0d5299501d0d",
00:11:29.911 "strip_size_kb": 64,
00:11:29.911 "state": "online",
00:11:29.911 "raid_level": "concat",
00:11:29.911 "superblock": true,
00:11:29.911 "num_base_bdevs": 3,
00:11:29.911 "num_base_bdevs_discovered": 3,
00:11:29.911 "num_base_bdevs_operational": 3,
00:11:29.911 "base_bdevs_list": [
00:11:29.911 {
00:11:29.911 "name": "BaseBdev1",
00:11:29.911 "uuid": "b3674691-903f-523d-b7d7-a41d154cf4f8",
00:11:29.911 "is_configured": true,
00:11:29.911 "data_offset": 2048,
00:11:29.911 "data_size": 63488
00:11:29.911 },
00:11:29.911 {
00:11:29.911 "name": "BaseBdev2",
00:11:29.911 "uuid": "9b7bd91f-2960-5758-99cf-c986eb3d796c",
00:11:29.911 "is_configured": true,
00:11:29.911 "data_offset": 2048,
00:11:29.911 "data_size": 63488
00:11:29.911 },
00:11:29.911 {
00:11:29.911 "name": "BaseBdev3",
00:11:29.911 "uuid": "ff4127ee-96e2-5df5-aa83-0ac5db39ccd7",
00:11:29.911 "is_configured": true,
00:11:29.911 "data_offset": 2048,
00:11:29.911 "data_size": 63488
00:11:29.911 }
00:11:29.911 ]
00:11:29.911 }'
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.911 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.547 [2024-11-27 04:28:26.795615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:30.547 [2024-11-27 04:28:26.795792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:30.547 [2024-11-27 04:28:26.798861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:30.547 [2024-11-27 04:28:26.799014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:30.547 [2024-11-27 04:28:26.799115] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.547 [2024-11-27 04:28:26.799169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:30.547 { 00:11:30.547 "results": [ 00:11:30.547 { 00:11:30.547 "job": "raid_bdev1", 00:11:30.547 "core_mask": "0x1", 00:11:30.547 "workload": "randrw", 00:11:30.547 "percentage": 50, 00:11:30.547 "status": "finished", 00:11:30.547 "queue_depth": 1, 00:11:30.547 "io_size": 131072, 00:11:30.547 "runtime": 1.367602, 00:11:30.547 "iops": 10985.652258478709, 00:11:30.547 "mibps": 1373.2065323098386, 00:11:30.547 "io_failed": 1, 00:11:30.547 "io_timeout": 0, 00:11:30.547 "avg_latency_us": 127.63250163846281, 00:11:30.547 "min_latency_us": 31.524890829694325, 00:11:30.547 "max_latency_us": 1788.646288209607 00:11:30.547 } 00:11:30.547 ], 00:11:30.547 "core_count": 1 00:11:30.547 } 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67493 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67493 ']' 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67493 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67493 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67493' 00:11:30.547 killing process with pid 67493 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67493 00:11:30.547 [2024-11-27 04:28:26.846573] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.547 04:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67493 00:11:30.861 [2024-11-27 04:28:27.150246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Iu4yh7a99p 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:32.237 ************************************ 00:11:32.237 END TEST raid_write_error_test 00:11:32.237 ************************************ 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:32.237 00:11:32.237 real 0m5.138s 00:11:32.237 user 0m5.994s 00:11:32.237 sys 0m0.687s 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.237 04:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.237 04:28:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:32.237 04:28:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:32.237 04:28:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.237 04:28:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.237 04:28:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.237 ************************************ 00:11:32.237 START TEST raid_state_function_test 00:11:32.237 ************************************ 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67638 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67638' 00:11:32.237 Process raid pid: 67638 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67638 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67638 ']' 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.237 04:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.238 04:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.238 04:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.496 [2024-11-27 04:28:28.862173] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:32.496 [2024-11-27 04:28:28.862429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.496 [2024-11-27 04:28:29.043739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.754 [2024-11-27 04:28:29.207670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.011 [2024-11-27 04:28:29.496432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.011 [2024-11-27 04:28:29.496625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.270 04:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.270 04:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.270 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:33.270 04:28:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.270 04:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.270 [2024-11-27 04:28:29.828824] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.270 [2024-11-27 04:28:29.829016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.270 [2024-11-27 04:28:29.829035] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.270 [2024-11-27 04:28:29.829048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.270 [2024-11-27 04:28:29.829056] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.270 [2024-11-27 04:28:29.829068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.270 04:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.271 
04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.271 04:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.529 04:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.529 "name": "Existed_Raid", 00:11:33.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.529 "strip_size_kb": 0, 00:11:33.529 "state": "configuring", 00:11:33.529 "raid_level": "raid1", 00:11:33.529 "superblock": false, 00:11:33.529 "num_base_bdevs": 3, 00:11:33.529 "num_base_bdevs_discovered": 0, 00:11:33.529 "num_base_bdevs_operational": 3, 00:11:33.529 "base_bdevs_list": [ 00:11:33.529 { 00:11:33.529 "name": "BaseBdev1", 00:11:33.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.529 "is_configured": false, 00:11:33.529 "data_offset": 0, 00:11:33.529 "data_size": 0 00:11:33.529 }, 00:11:33.529 { 00:11:33.529 "name": "BaseBdev2", 00:11:33.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.529 "is_configured": false, 00:11:33.529 "data_offset": 0, 00:11:33.529 "data_size": 0 00:11:33.529 }, 00:11:33.529 { 00:11:33.529 "name": "BaseBdev3", 00:11:33.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.529 "is_configured": false, 00:11:33.529 "data_offset": 0, 00:11:33.529 "data_size": 0 00:11:33.529 } 00:11:33.529 ] 00:11:33.529 }' 00:11:33.529 04:28:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.529 04:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.788 [2024-11-27 04:28:30.260218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.788 [2024-11-27 04:28:30.260288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.788 [2024-11-27 04:28:30.272215] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.788 [2024-11-27 04:28:30.272306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.788 [2024-11-27 04:28:30.272318] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.788 [2024-11-27 04:28:30.272330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.788 [2024-11-27 04:28:30.272338] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.788 [2024-11-27 04:28:30.272349] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.788 [2024-11-27 04:28:30.331574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.788 BaseBdev1 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.788 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.788 [ 00:11:33.788 { 00:11:33.788 "name": "BaseBdev1", 00:11:33.788 "aliases": [ 00:11:33.788 "39d80f97-afeb-4424-8159-66b2ba9a5e99" 00:11:33.788 ], 00:11:33.788 "product_name": "Malloc disk", 00:11:33.788 "block_size": 512, 00:11:33.788 "num_blocks": 65536, 00:11:33.788 "uuid": "39d80f97-afeb-4424-8159-66b2ba9a5e99", 00:11:33.788 "assigned_rate_limits": { 00:11:33.788 "rw_ios_per_sec": 0, 00:11:33.788 "rw_mbytes_per_sec": 0, 00:11:33.788 "r_mbytes_per_sec": 0, 00:11:33.788 "w_mbytes_per_sec": 0 00:11:33.788 }, 00:11:33.788 "claimed": true, 00:11:33.789 "claim_type": "exclusive_write", 00:11:33.789 "zoned": false, 00:11:33.789 "supported_io_types": { 00:11:33.789 "read": true, 00:11:33.789 "write": true, 00:11:33.789 "unmap": true, 00:11:33.789 "flush": true, 00:11:33.789 "reset": true, 00:11:33.789 "nvme_admin": false, 00:11:33.789 "nvme_io": false, 00:11:33.789 "nvme_io_md": false, 00:11:33.789 "write_zeroes": true, 00:11:33.789 "zcopy": true, 00:11:33.789 "get_zone_info": false, 00:11:33.789 "zone_management": false, 00:11:33.789 "zone_append": false, 00:11:33.789 "compare": false, 00:11:33.789 "compare_and_write": false, 00:11:33.789 "abort": true, 00:11:33.789 "seek_hole": false, 00:11:33.789 "seek_data": false, 00:11:33.789 "copy": true, 00:11:33.789 "nvme_iov_md": false 00:11:33.789 }, 00:11:33.789 "memory_domains": [ 00:11:34.047 { 00:11:34.047 "dma_device_id": "system", 00:11:34.047 "dma_device_type": 1 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.047 "dma_device_type": 2 00:11:34.047 } 00:11:34.047 ], 00:11:34.047 "driver_specific": {} 00:11:34.047 } 00:11:34.047 ] 00:11:34.047 04:28:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:34.047 "name": "Existed_Raid", 00:11:34.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.047 "strip_size_kb": 0, 00:11:34.047 "state": "configuring", 00:11:34.047 "raid_level": "raid1", 00:11:34.047 "superblock": false, 00:11:34.047 "num_base_bdevs": 3, 00:11:34.047 "num_base_bdevs_discovered": 1, 00:11:34.047 "num_base_bdevs_operational": 3, 00:11:34.047 "base_bdevs_list": [ 00:11:34.047 { 00:11:34.047 "name": "BaseBdev1", 00:11:34.047 "uuid": "39d80f97-afeb-4424-8159-66b2ba9a5e99", 00:11:34.047 "is_configured": true, 00:11:34.047 "data_offset": 0, 00:11:34.047 "data_size": 65536 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "name": "BaseBdev2", 00:11:34.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.047 "is_configured": false, 00:11:34.047 "data_offset": 0, 00:11:34.047 "data_size": 0 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "name": "BaseBdev3", 00:11:34.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.047 "is_configured": false, 00:11:34.047 "data_offset": 0, 00:11:34.047 "data_size": 0 00:11:34.047 } 00:11:34.047 ] 00:11:34.047 }' 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.047 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.305 [2024-11-27 04:28:30.822873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.305 [2024-11-27 04:28:30.823073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.305 [2024-11-27 04:28:30.830918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.305 [2024-11-27 04:28:30.833476] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.305 [2024-11-27 04:28:30.833644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.305 [2024-11-27 04:28:30.833663] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.305 [2024-11-27 04:28:30.833677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.305 "name": "Existed_Raid", 00:11:34.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.305 "strip_size_kb": 0, 00:11:34.305 "state": "configuring", 00:11:34.305 "raid_level": "raid1", 00:11:34.305 "superblock": false, 00:11:34.305 "num_base_bdevs": 3, 00:11:34.305 "num_base_bdevs_discovered": 1, 00:11:34.305 "num_base_bdevs_operational": 3, 00:11:34.305 "base_bdevs_list": [ 00:11:34.305 { 00:11:34.305 "name": "BaseBdev1", 00:11:34.305 "uuid": "39d80f97-afeb-4424-8159-66b2ba9a5e99", 00:11:34.305 "is_configured": true, 00:11:34.305 "data_offset": 0, 00:11:34.305 "data_size": 65536 00:11:34.305 }, 00:11:34.305 { 00:11:34.305 "name": "BaseBdev2", 00:11:34.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.305 
"is_configured": false, 00:11:34.305 "data_offset": 0, 00:11:34.305 "data_size": 0 00:11:34.305 }, 00:11:34.305 { 00:11:34.305 "name": "BaseBdev3", 00:11:34.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.305 "is_configured": false, 00:11:34.305 "data_offset": 0, 00:11:34.305 "data_size": 0 00:11:34.305 } 00:11:34.305 ] 00:11:34.305 }' 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.305 04:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.872 [2024-11-27 04:28:31.342002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.872 BaseBdev2 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.872 04:28:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.872 [ 00:11:34.872 { 00:11:34.872 "name": "BaseBdev2", 00:11:34.872 "aliases": [ 00:11:34.872 "125cfb58-880f-43f3-a21a-72705beb979b" 00:11:34.872 ], 00:11:34.872 "product_name": "Malloc disk", 00:11:34.872 "block_size": 512, 00:11:34.872 "num_blocks": 65536, 00:11:34.872 "uuid": "125cfb58-880f-43f3-a21a-72705beb979b", 00:11:34.872 "assigned_rate_limits": { 00:11:34.872 "rw_ios_per_sec": 0, 00:11:34.872 "rw_mbytes_per_sec": 0, 00:11:34.872 "r_mbytes_per_sec": 0, 00:11:34.872 "w_mbytes_per_sec": 0 00:11:34.872 }, 00:11:34.872 "claimed": true, 00:11:34.872 "claim_type": "exclusive_write", 00:11:34.872 "zoned": false, 00:11:34.872 "supported_io_types": { 00:11:34.872 "read": true, 00:11:34.872 "write": true, 00:11:34.872 "unmap": true, 00:11:34.872 "flush": true, 00:11:34.872 "reset": true, 00:11:34.872 "nvme_admin": false, 00:11:34.872 "nvme_io": false, 00:11:34.872 "nvme_io_md": false, 00:11:34.872 "write_zeroes": true, 00:11:34.872 "zcopy": true, 00:11:34.872 "get_zone_info": false, 00:11:34.872 "zone_management": false, 00:11:34.872 "zone_append": false, 00:11:34.872 "compare": false, 00:11:34.872 "compare_and_write": false, 00:11:34.872 "abort": true, 00:11:34.872 "seek_hole": false, 00:11:34.872 "seek_data": false, 00:11:34.872 "copy": true, 00:11:34.872 "nvme_iov_md": false 00:11:34.872 }, 00:11:34.872 
"memory_domains": [ 00:11:34.872 { 00:11:34.872 "dma_device_id": "system", 00:11:34.872 "dma_device_type": 1 00:11:34.872 }, 00:11:34.872 { 00:11:34.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.872 "dma_device_type": 2 00:11:34.872 } 00:11:34.872 ], 00:11:34.872 "driver_specific": {} 00:11:34.872 } 00:11:34.872 ] 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.872 "name": "Existed_Raid", 00:11:34.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.872 "strip_size_kb": 0, 00:11:34.872 "state": "configuring", 00:11:34.872 "raid_level": "raid1", 00:11:34.872 "superblock": false, 00:11:34.872 "num_base_bdevs": 3, 00:11:34.872 "num_base_bdevs_discovered": 2, 00:11:34.872 "num_base_bdevs_operational": 3, 00:11:34.872 "base_bdevs_list": [ 00:11:34.872 { 00:11:34.872 "name": "BaseBdev1", 00:11:34.872 "uuid": "39d80f97-afeb-4424-8159-66b2ba9a5e99", 00:11:34.872 "is_configured": true, 00:11:34.872 "data_offset": 0, 00:11:34.872 "data_size": 65536 00:11:34.872 }, 00:11:34.872 { 00:11:34.872 "name": "BaseBdev2", 00:11:34.872 "uuid": "125cfb58-880f-43f3-a21a-72705beb979b", 00:11:34.872 "is_configured": true, 00:11:34.872 "data_offset": 0, 00:11:34.872 "data_size": 65536 00:11:34.872 }, 00:11:34.872 { 00:11:34.872 "name": "BaseBdev3", 00:11:34.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.872 "is_configured": false, 00:11:34.872 "data_offset": 0, 00:11:34.872 "data_size": 0 00:11:34.872 } 00:11:34.872 ] 00:11:34.872 }' 00:11:34.872 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.873 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.441 [2024-11-27 04:28:31.945796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.441 [2024-11-27 04:28:31.945904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.441 [2024-11-27 04:28:31.945930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:35.441 [2024-11-27 04:28:31.946646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:35.441 [2024-11-27 04:28:31.946975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.441 [2024-11-27 04:28:31.946994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:35.441 [2024-11-27 04:28:31.947405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.441 BaseBdev3 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.441 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.441 [ 00:11:35.441 { 00:11:35.441 "name": "BaseBdev3", 00:11:35.441 "aliases": [ 00:11:35.441 "b9ecec3f-922b-4c38-9b6d-c1c563791b22" 00:11:35.441 ], 00:11:35.441 "product_name": "Malloc disk", 00:11:35.441 "block_size": 512, 00:11:35.441 "num_blocks": 65536, 00:11:35.441 "uuid": "b9ecec3f-922b-4c38-9b6d-c1c563791b22", 00:11:35.441 "assigned_rate_limits": { 00:11:35.441 "rw_ios_per_sec": 0, 00:11:35.441 "rw_mbytes_per_sec": 0, 00:11:35.441 "r_mbytes_per_sec": 0, 00:11:35.441 "w_mbytes_per_sec": 0 00:11:35.441 }, 00:11:35.441 "claimed": true, 00:11:35.441 "claim_type": "exclusive_write", 00:11:35.441 "zoned": false, 00:11:35.441 "supported_io_types": { 00:11:35.441 "read": true, 00:11:35.441 "write": true, 00:11:35.441 "unmap": true, 00:11:35.441 "flush": true, 00:11:35.441 "reset": true, 00:11:35.441 "nvme_admin": false, 00:11:35.441 "nvme_io": false, 00:11:35.441 "nvme_io_md": false, 00:11:35.441 "write_zeroes": true, 00:11:35.441 "zcopy": true, 00:11:35.441 "get_zone_info": false, 00:11:35.441 "zone_management": false, 00:11:35.441 "zone_append": false, 00:11:35.441 "compare": false, 00:11:35.441 "compare_and_write": false, 00:11:35.441 "abort": true, 00:11:35.441 "seek_hole": false, 00:11:35.441 "seek_data": false, 00:11:35.441 
"copy": true, 00:11:35.441 "nvme_iov_md": false 00:11:35.441 }, 00:11:35.441 "memory_domains": [ 00:11:35.441 { 00:11:35.441 "dma_device_id": "system", 00:11:35.441 "dma_device_type": 1 00:11:35.441 }, 00:11:35.441 { 00:11:35.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.441 "dma_device_type": 2 00:11:35.441 } 00:11:35.441 ], 00:11:35.441 "driver_specific": {} 00:11:35.441 } 00:11:35.441 ] 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.442 04:28:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.442 04:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.442 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.442 "name": "Existed_Raid", 00:11:35.442 "uuid": "e5c9a306-8ebd-447a-a9b5-4ae5d727d70a", 00:11:35.442 "strip_size_kb": 0, 00:11:35.442 "state": "online", 00:11:35.442 "raid_level": "raid1", 00:11:35.442 "superblock": false, 00:11:35.442 "num_base_bdevs": 3, 00:11:35.442 "num_base_bdevs_discovered": 3, 00:11:35.442 "num_base_bdevs_operational": 3, 00:11:35.442 "base_bdevs_list": [ 00:11:35.442 { 00:11:35.442 "name": "BaseBdev1", 00:11:35.442 "uuid": "39d80f97-afeb-4424-8159-66b2ba9a5e99", 00:11:35.442 "is_configured": true, 00:11:35.442 "data_offset": 0, 00:11:35.442 "data_size": 65536 00:11:35.442 }, 00:11:35.442 { 00:11:35.442 "name": "BaseBdev2", 00:11:35.442 "uuid": "125cfb58-880f-43f3-a21a-72705beb979b", 00:11:35.442 "is_configured": true, 00:11:35.442 "data_offset": 0, 00:11:35.442 "data_size": 65536 00:11:35.442 }, 00:11:35.442 { 00:11:35.442 "name": "BaseBdev3", 00:11:35.442 "uuid": "b9ecec3f-922b-4c38-9b6d-c1c563791b22", 00:11:35.442 "is_configured": true, 00:11:35.442 "data_offset": 0, 00:11:35.442 "data_size": 65536 00:11:35.442 } 00:11:35.442 ] 00:11:35.442 }' 00:11:35.442 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.442 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.012 04:28:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.012 [2024-11-27 04:28:32.425522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.012 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.012 "name": "Existed_Raid", 00:11:36.012 "aliases": [ 00:11:36.012 "e5c9a306-8ebd-447a-a9b5-4ae5d727d70a" 00:11:36.012 ], 00:11:36.012 "product_name": "Raid Volume", 00:11:36.012 "block_size": 512, 00:11:36.012 "num_blocks": 65536, 00:11:36.012 "uuid": "e5c9a306-8ebd-447a-a9b5-4ae5d727d70a", 00:11:36.012 "assigned_rate_limits": { 00:11:36.012 "rw_ios_per_sec": 0, 00:11:36.012 "rw_mbytes_per_sec": 0, 00:11:36.012 "r_mbytes_per_sec": 0, 00:11:36.012 "w_mbytes_per_sec": 0 00:11:36.012 }, 00:11:36.012 "claimed": false, 00:11:36.012 "zoned": false, 
00:11:36.012 "supported_io_types": { 00:11:36.012 "read": true, 00:11:36.012 "write": true, 00:11:36.012 "unmap": false, 00:11:36.012 "flush": false, 00:11:36.012 "reset": true, 00:11:36.012 "nvme_admin": false, 00:11:36.012 "nvme_io": false, 00:11:36.012 "nvme_io_md": false, 00:11:36.012 "write_zeroes": true, 00:11:36.012 "zcopy": false, 00:11:36.012 "get_zone_info": false, 00:11:36.012 "zone_management": false, 00:11:36.012 "zone_append": false, 00:11:36.012 "compare": false, 00:11:36.012 "compare_and_write": false, 00:11:36.012 "abort": false, 00:11:36.012 "seek_hole": false, 00:11:36.012 "seek_data": false, 00:11:36.012 "copy": false, 00:11:36.012 "nvme_iov_md": false 00:11:36.012 }, 00:11:36.012 "memory_domains": [ 00:11:36.012 { 00:11:36.012 "dma_device_id": "system", 00:11:36.012 "dma_device_type": 1 00:11:36.012 }, 00:11:36.012 { 00:11:36.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.012 "dma_device_type": 2 00:11:36.012 }, 00:11:36.012 { 00:11:36.012 "dma_device_id": "system", 00:11:36.012 "dma_device_type": 1 00:11:36.012 }, 00:11:36.012 { 00:11:36.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.012 "dma_device_type": 2 00:11:36.012 }, 00:11:36.012 { 00:11:36.012 "dma_device_id": "system", 00:11:36.012 "dma_device_type": 1 00:11:36.012 }, 00:11:36.012 { 00:11:36.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.012 "dma_device_type": 2 00:11:36.012 } 00:11:36.012 ], 00:11:36.012 "driver_specific": { 00:11:36.012 "raid": { 00:11:36.012 "uuid": "e5c9a306-8ebd-447a-a9b5-4ae5d727d70a", 00:11:36.012 "strip_size_kb": 0, 00:11:36.012 "state": "online", 00:11:36.012 "raid_level": "raid1", 00:11:36.012 "superblock": false, 00:11:36.012 "num_base_bdevs": 3, 00:11:36.012 "num_base_bdevs_discovered": 3, 00:11:36.012 "num_base_bdevs_operational": 3, 00:11:36.012 "base_bdevs_list": [ 00:11:36.012 { 00:11:36.012 "name": "BaseBdev1", 00:11:36.012 "uuid": "39d80f97-afeb-4424-8159-66b2ba9a5e99", 00:11:36.012 "is_configured": true, 00:11:36.012 
"data_offset": 0, 00:11:36.013 "data_size": 65536 00:11:36.013 }, 00:11:36.013 { 00:11:36.013 "name": "BaseBdev2", 00:11:36.013 "uuid": "125cfb58-880f-43f3-a21a-72705beb979b", 00:11:36.013 "is_configured": true, 00:11:36.013 "data_offset": 0, 00:11:36.013 "data_size": 65536 00:11:36.013 }, 00:11:36.013 { 00:11:36.013 "name": "BaseBdev3", 00:11:36.013 "uuid": "b9ecec3f-922b-4c38-9b6d-c1c563791b22", 00:11:36.013 "is_configured": true, 00:11:36.013 "data_offset": 0, 00:11:36.013 "data_size": 65536 00:11:36.013 } 00:11:36.013 ] 00:11:36.013 } 00:11:36.013 } 00:11:36.013 }' 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.013 BaseBdev2 00:11:36.013 BaseBdev3' 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.013 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.273 [2024-11-27 04:28:32.664784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.273 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.274 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.535 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.535 "name": "Existed_Raid", 00:11:36.535 "uuid": "e5c9a306-8ebd-447a-a9b5-4ae5d727d70a", 00:11:36.535 "strip_size_kb": 0, 00:11:36.535 "state": "online", 00:11:36.535 "raid_level": "raid1", 00:11:36.535 "superblock": false, 00:11:36.535 "num_base_bdevs": 3, 00:11:36.535 "num_base_bdevs_discovered": 2, 00:11:36.535 "num_base_bdevs_operational": 2, 00:11:36.535 "base_bdevs_list": [ 00:11:36.535 { 00:11:36.535 "name": null, 00:11:36.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.535 "is_configured": false, 00:11:36.535 "data_offset": 0, 00:11:36.535 "data_size": 65536 00:11:36.535 }, 00:11:36.535 { 00:11:36.535 "name": "BaseBdev2", 00:11:36.535 "uuid": "125cfb58-880f-43f3-a21a-72705beb979b", 00:11:36.535 "is_configured": true, 00:11:36.535 "data_offset": 0, 00:11:36.535 "data_size": 65536 00:11:36.535 }, 00:11:36.535 { 00:11:36.535 "name": "BaseBdev3", 00:11:36.535 "uuid": "b9ecec3f-922b-4c38-9b6d-c1c563791b22", 00:11:36.535 "is_configured": true, 00:11:36.535 "data_offset": 0, 00:11:36.535 "data_size": 65536 00:11:36.535 } 00:11:36.535 ] 
00:11:36.535 }' 00:11:36.535 04:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.535 04:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.795 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.795 [2024-11-27 04:28:33.315638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.055 04:28:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.055 [2024-11-27 04:28:33.492595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.055 [2024-11-27 04:28:33.492843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.055 [2024-11-27 04:28:33.616116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.055 [2024-11-27 04:28:33.616203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.055 [2024-11-27 04:28:33.616220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.055 04:28:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.055 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.315 BaseBdev2 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.315 
04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.315 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.315 [ 00:11:37.315 { 00:11:37.315 "name": "BaseBdev2", 00:11:37.315 "aliases": [ 00:11:37.315 "8f7877fb-408c-49ef-8aff-807b22246322" 00:11:37.315 ], 00:11:37.315 "product_name": "Malloc disk", 00:11:37.315 "block_size": 512, 00:11:37.315 "num_blocks": 65536, 00:11:37.315 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:37.315 "assigned_rate_limits": { 00:11:37.315 "rw_ios_per_sec": 0, 00:11:37.315 "rw_mbytes_per_sec": 0, 00:11:37.315 "r_mbytes_per_sec": 0, 00:11:37.315 "w_mbytes_per_sec": 0 00:11:37.315 }, 00:11:37.315 "claimed": false, 00:11:37.315 "zoned": false, 00:11:37.315 "supported_io_types": { 00:11:37.315 "read": true, 00:11:37.315 "write": true, 00:11:37.315 "unmap": true, 00:11:37.315 "flush": true, 00:11:37.315 "reset": true, 00:11:37.315 "nvme_admin": false, 00:11:37.315 "nvme_io": false, 00:11:37.315 "nvme_io_md": false, 00:11:37.315 "write_zeroes": true, 
00:11:37.315 "zcopy": true, 00:11:37.315 "get_zone_info": false, 00:11:37.315 "zone_management": false, 00:11:37.315 "zone_append": false, 00:11:37.315 "compare": false, 00:11:37.315 "compare_and_write": false, 00:11:37.315 "abort": true, 00:11:37.315 "seek_hole": false, 00:11:37.315 "seek_data": false, 00:11:37.315 "copy": true, 00:11:37.315 "nvme_iov_md": false 00:11:37.315 }, 00:11:37.315 "memory_domains": [ 00:11:37.315 { 00:11:37.315 "dma_device_id": "system", 00:11:37.315 "dma_device_type": 1 00:11:37.315 }, 00:11:37.315 { 00:11:37.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.315 "dma_device_type": 2 00:11:37.315 } 00:11:37.315 ], 00:11:37.315 "driver_specific": {} 00:11:37.315 } 00:11:37.315 ] 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.316 BaseBdev3 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.316 04:28:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.316 [ 00:11:37.316 { 00:11:37.316 "name": "BaseBdev3", 00:11:37.316 "aliases": [ 00:11:37.316 "8a0b6c51-4c54-477d-a3ae-1967a568b4fc" 00:11:37.316 ], 00:11:37.316 "product_name": "Malloc disk", 00:11:37.316 "block_size": 512, 00:11:37.316 "num_blocks": 65536, 00:11:37.316 "uuid": "8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:37.316 "assigned_rate_limits": { 00:11:37.316 "rw_ios_per_sec": 0, 00:11:37.316 "rw_mbytes_per_sec": 0, 00:11:37.316 "r_mbytes_per_sec": 0, 00:11:37.316 "w_mbytes_per_sec": 0 00:11:37.316 }, 00:11:37.316 "claimed": false, 00:11:37.316 "zoned": false, 00:11:37.316 "supported_io_types": { 00:11:37.316 "read": true, 00:11:37.316 "write": true, 00:11:37.316 "unmap": true, 00:11:37.316 "flush": true, 00:11:37.316 "reset": true, 00:11:37.316 "nvme_admin": false, 00:11:37.316 "nvme_io": false, 00:11:37.316 "nvme_io_md": false, 00:11:37.316 "write_zeroes": true, 
00:11:37.316 "zcopy": true, 00:11:37.316 "get_zone_info": false, 00:11:37.316 "zone_management": false, 00:11:37.316 "zone_append": false, 00:11:37.316 "compare": false, 00:11:37.316 "compare_and_write": false, 00:11:37.316 "abort": true, 00:11:37.316 "seek_hole": false, 00:11:37.316 "seek_data": false, 00:11:37.316 "copy": true, 00:11:37.316 "nvme_iov_md": false 00:11:37.316 }, 00:11:37.316 "memory_domains": [ 00:11:37.316 { 00:11:37.316 "dma_device_id": "system", 00:11:37.316 "dma_device_type": 1 00:11:37.316 }, 00:11:37.316 { 00:11:37.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.316 "dma_device_type": 2 00:11:37.316 } 00:11:37.316 ], 00:11:37.316 "driver_specific": {} 00:11:37.316 } 00:11:37.316 ] 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.316 [2024-11-27 04:28:33.869831] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.316 [2024-11-27 04:28:33.870054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.316 [2024-11-27 04:28:33.870137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.316 [2024-11-27 04:28:33.872577] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.316 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.576 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.576 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:37.576 "name": "Existed_Raid", 00:11:37.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.576 "strip_size_kb": 0, 00:11:37.576 "state": "configuring", 00:11:37.576 "raid_level": "raid1", 00:11:37.576 "superblock": false, 00:11:37.576 "num_base_bdevs": 3, 00:11:37.576 "num_base_bdevs_discovered": 2, 00:11:37.576 "num_base_bdevs_operational": 3, 00:11:37.576 "base_bdevs_list": [ 00:11:37.576 { 00:11:37.576 "name": "BaseBdev1", 00:11:37.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.576 "is_configured": false, 00:11:37.576 "data_offset": 0, 00:11:37.576 "data_size": 0 00:11:37.576 }, 00:11:37.576 { 00:11:37.576 "name": "BaseBdev2", 00:11:37.576 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:37.576 "is_configured": true, 00:11:37.576 "data_offset": 0, 00:11:37.576 "data_size": 65536 00:11:37.576 }, 00:11:37.576 { 00:11:37.576 "name": "BaseBdev3", 00:11:37.576 "uuid": "8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:37.576 "is_configured": true, 00:11:37.576 "data_offset": 0, 00:11:37.576 "data_size": 65536 00:11:37.576 } 00:11:37.576 ] 00:11:37.576 }' 00:11:37.576 04:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.576 04:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.835 [2024-11-27 04:28:34.384965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.835 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.093 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.093 "name": "Existed_Raid", 00:11:38.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.093 "strip_size_kb": 0, 00:11:38.093 "state": "configuring", 00:11:38.093 "raid_level": "raid1", 00:11:38.093 "superblock": false, 00:11:38.093 "num_base_bdevs": 3, 
00:11:38.093 "num_base_bdevs_discovered": 1, 00:11:38.093 "num_base_bdevs_operational": 3, 00:11:38.094 "base_bdevs_list": [ 00:11:38.094 { 00:11:38.094 "name": "BaseBdev1", 00:11:38.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.094 "is_configured": false, 00:11:38.094 "data_offset": 0, 00:11:38.094 "data_size": 0 00:11:38.094 }, 00:11:38.094 { 00:11:38.094 "name": null, 00:11:38.094 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:38.094 "is_configured": false, 00:11:38.094 "data_offset": 0, 00:11:38.094 "data_size": 65536 00:11:38.094 }, 00:11:38.094 { 00:11:38.094 "name": "BaseBdev3", 00:11:38.094 "uuid": "8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:38.094 "is_configured": true, 00:11:38.094 "data_offset": 0, 00:11:38.094 "data_size": 65536 00:11:38.094 } 00:11:38.094 ] 00:11:38.094 }' 00:11:38.094 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.094 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.353 04:28:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.353 [2024-11-27 04:28:34.921943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.353 BaseBdev1 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.353 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.612 [ 00:11:38.612 { 00:11:38.612 "name": "BaseBdev1", 00:11:38.612 "aliases": [ 00:11:38.612 "47ae86c5-765b-45b1-a083-86068d37dd0d" 00:11:38.612 ], 00:11:38.612 "product_name": "Malloc disk", 
00:11:38.612 "block_size": 512, 00:11:38.612 "num_blocks": 65536, 00:11:38.612 "uuid": "47ae86c5-765b-45b1-a083-86068d37dd0d", 00:11:38.612 "assigned_rate_limits": { 00:11:38.612 "rw_ios_per_sec": 0, 00:11:38.612 "rw_mbytes_per_sec": 0, 00:11:38.612 "r_mbytes_per_sec": 0, 00:11:38.612 "w_mbytes_per_sec": 0 00:11:38.612 }, 00:11:38.612 "claimed": true, 00:11:38.612 "claim_type": "exclusive_write", 00:11:38.612 "zoned": false, 00:11:38.612 "supported_io_types": { 00:11:38.612 "read": true, 00:11:38.612 "write": true, 00:11:38.612 "unmap": true, 00:11:38.612 "flush": true, 00:11:38.612 "reset": true, 00:11:38.612 "nvme_admin": false, 00:11:38.612 "nvme_io": false, 00:11:38.612 "nvme_io_md": false, 00:11:38.612 "write_zeroes": true, 00:11:38.612 "zcopy": true, 00:11:38.612 "get_zone_info": false, 00:11:38.612 "zone_management": false, 00:11:38.612 "zone_append": false, 00:11:38.612 "compare": false, 00:11:38.612 "compare_and_write": false, 00:11:38.612 "abort": true, 00:11:38.612 "seek_hole": false, 00:11:38.612 "seek_data": false, 00:11:38.612 "copy": true, 00:11:38.612 "nvme_iov_md": false 00:11:38.612 }, 00:11:38.612 "memory_domains": [ 00:11:38.612 { 00:11:38.612 "dma_device_id": "system", 00:11:38.612 "dma_device_type": 1 00:11:38.612 }, 00:11:38.612 { 00:11:38.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.612 "dma_device_type": 2 00:11:38.612 } 00:11:38.612 ], 00:11:38.612 "driver_specific": {} 00:11:38.612 } 00:11:38.612 ] 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.612 04:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.612 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.612 "name": "Existed_Raid", 00:11:38.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.612 "strip_size_kb": 0, 00:11:38.612 "state": "configuring", 00:11:38.612 "raid_level": "raid1", 00:11:38.612 "superblock": false, 00:11:38.612 "num_base_bdevs": 3, 00:11:38.612 "num_base_bdevs_discovered": 2, 00:11:38.612 "num_base_bdevs_operational": 3, 00:11:38.612 "base_bdevs_list": [ 00:11:38.612 { 00:11:38.612 "name": "BaseBdev1", 00:11:38.612 "uuid": 
"47ae86c5-765b-45b1-a083-86068d37dd0d", 00:11:38.612 "is_configured": true, 00:11:38.612 "data_offset": 0, 00:11:38.612 "data_size": 65536 00:11:38.612 }, 00:11:38.612 { 00:11:38.612 "name": null, 00:11:38.612 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:38.612 "is_configured": false, 00:11:38.612 "data_offset": 0, 00:11:38.612 "data_size": 65536 00:11:38.612 }, 00:11:38.612 { 00:11:38.612 "name": "BaseBdev3", 00:11:38.612 "uuid": "8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:38.612 "is_configured": true, 00:11:38.612 "data_offset": 0, 00:11:38.612 "data_size": 65536 00:11:38.612 } 00:11:38.612 ] 00:11:38.612 }' 00:11:38.612 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.612 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.872 [2024-11-27 04:28:35.441223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.872 04:28:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.872 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.131 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.131 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.131 "name": "Existed_Raid", 00:11:39.131 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:39.131 "strip_size_kb": 0, 00:11:39.131 "state": "configuring", 00:11:39.131 "raid_level": "raid1", 00:11:39.131 "superblock": false, 00:11:39.131 "num_base_bdevs": 3, 00:11:39.131 "num_base_bdevs_discovered": 1, 00:11:39.131 "num_base_bdevs_operational": 3, 00:11:39.131 "base_bdevs_list": [ 00:11:39.131 { 00:11:39.131 "name": "BaseBdev1", 00:11:39.131 "uuid": "47ae86c5-765b-45b1-a083-86068d37dd0d", 00:11:39.131 "is_configured": true, 00:11:39.131 "data_offset": 0, 00:11:39.131 "data_size": 65536 00:11:39.131 }, 00:11:39.131 { 00:11:39.131 "name": null, 00:11:39.131 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:39.131 "is_configured": false, 00:11:39.131 "data_offset": 0, 00:11:39.131 "data_size": 65536 00:11:39.131 }, 00:11:39.131 { 00:11:39.131 "name": null, 00:11:39.131 "uuid": "8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:39.131 "is_configured": false, 00:11:39.131 "data_offset": 0, 00:11:39.131 "data_size": 65536 00:11:39.131 } 00:11:39.131 ] 00:11:39.131 }' 00:11:39.131 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.131 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.390 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.390 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.390 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.390 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.390 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.649 [2024-11-27 04:28:35.992375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.649 04:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.649 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.649 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.649 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.649 04:28:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.649 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.649 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.649 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.649 "name": "Existed_Raid", 00:11:39.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.649 "strip_size_kb": 0, 00:11:39.649 "state": "configuring", 00:11:39.649 "raid_level": "raid1", 00:11:39.649 "superblock": false, 00:11:39.649 "num_base_bdevs": 3, 00:11:39.649 "num_base_bdevs_discovered": 2, 00:11:39.649 "num_base_bdevs_operational": 3, 00:11:39.649 "base_bdevs_list": [ 00:11:39.649 { 00:11:39.649 "name": "BaseBdev1", 00:11:39.649 "uuid": "47ae86c5-765b-45b1-a083-86068d37dd0d", 00:11:39.649 "is_configured": true, 00:11:39.649 "data_offset": 0, 00:11:39.649 "data_size": 65536 00:11:39.649 }, 00:11:39.649 { 00:11:39.649 "name": null, 00:11:39.649 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:39.649 "is_configured": false, 00:11:39.649 "data_offset": 0, 00:11:39.649 "data_size": 65536 00:11:39.649 }, 00:11:39.649 { 00:11:39.649 "name": "BaseBdev3", 00:11:39.649 "uuid": "8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:39.649 "is_configured": true, 00:11:39.649 "data_offset": 0, 00:11:39.649 "data_size": 65536 00:11:39.649 } 00:11:39.649 ] 00:11:39.649 }' 00:11:39.649 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.649 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.908 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.908 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.908 04:28:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.908 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.908 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.908 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:39.908 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.908 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.908 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.908 [2024-11-27 04:28:36.487574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.168 04:28:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.168 "name": "Existed_Raid", 00:11:40.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.168 "strip_size_kb": 0, 00:11:40.168 "state": "configuring", 00:11:40.168 "raid_level": "raid1", 00:11:40.168 "superblock": false, 00:11:40.168 "num_base_bdevs": 3, 00:11:40.168 "num_base_bdevs_discovered": 1, 00:11:40.168 "num_base_bdevs_operational": 3, 00:11:40.168 "base_bdevs_list": [ 00:11:40.168 { 00:11:40.168 "name": null, 00:11:40.168 "uuid": "47ae86c5-765b-45b1-a083-86068d37dd0d", 00:11:40.168 "is_configured": false, 00:11:40.168 "data_offset": 0, 00:11:40.168 "data_size": 65536 00:11:40.168 }, 00:11:40.168 { 00:11:40.168 "name": null, 00:11:40.168 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:40.168 "is_configured": false, 00:11:40.168 "data_offset": 0, 00:11:40.168 "data_size": 65536 00:11:40.168 }, 00:11:40.168 { 00:11:40.168 "name": "BaseBdev3", 00:11:40.168 "uuid": "8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:40.168 "is_configured": true, 00:11:40.168 "data_offset": 0, 00:11:40.168 "data_size": 65536 00:11:40.168 } 00:11:40.168 ] 00:11:40.168 }' 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.168 04:28:36 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.737 [2024-11-27 04:28:37.070334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.737 "name": "Existed_Raid", 00:11:40.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.737 "strip_size_kb": 0, 00:11:40.737 "state": "configuring", 00:11:40.737 "raid_level": "raid1", 00:11:40.737 "superblock": false, 00:11:40.737 "num_base_bdevs": 3, 00:11:40.737 "num_base_bdevs_discovered": 2, 00:11:40.737 "num_base_bdevs_operational": 3, 00:11:40.737 "base_bdevs_list": [ 00:11:40.737 { 00:11:40.737 "name": null, 00:11:40.737 "uuid": "47ae86c5-765b-45b1-a083-86068d37dd0d", 00:11:40.737 "is_configured": false, 00:11:40.737 "data_offset": 0, 00:11:40.737 "data_size": 65536 00:11:40.737 }, 00:11:40.737 { 00:11:40.737 "name": "BaseBdev2", 00:11:40.737 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:40.737 "is_configured": true, 00:11:40.737 "data_offset": 0, 00:11:40.737 "data_size": 65536 00:11:40.737 }, 00:11:40.737 { 
00:11:40.737 "name": "BaseBdev3", 00:11:40.737 "uuid": "8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:40.737 "is_configured": true, 00:11:40.737 "data_offset": 0, 00:11:40.737 "data_size": 65536 00:11:40.737 } 00:11:40.737 ] 00:11:40.737 }' 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.737 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 47ae86c5-765b-45b1-a083-86068d37dd0d 00:11:41.046 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.046 04:28:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.046 [2024-11-27 04:28:37.595269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.046 [2024-11-27 04:28:37.595486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.046 [2024-11-27 04:28:37.595503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:41.046 [2024-11-27 04:28:37.595857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:41.046 [2024-11-27 04:28:37.596071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.046 [2024-11-27 04:28:37.596108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:41.046 [2024-11-27 04:28:37.596479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.046 NewBaseBdev 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.047 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.323 [ 00:11:41.323 { 00:11:41.323 "name": "NewBaseBdev", 00:11:41.323 "aliases": [ 00:11:41.323 "47ae86c5-765b-45b1-a083-86068d37dd0d" 00:11:41.323 ], 00:11:41.323 "product_name": "Malloc disk", 00:11:41.323 "block_size": 512, 00:11:41.323 "num_blocks": 65536, 00:11:41.323 "uuid": "47ae86c5-765b-45b1-a083-86068d37dd0d", 00:11:41.323 "assigned_rate_limits": { 00:11:41.323 "rw_ios_per_sec": 0, 00:11:41.323 "rw_mbytes_per_sec": 0, 00:11:41.323 "r_mbytes_per_sec": 0, 00:11:41.323 "w_mbytes_per_sec": 0 00:11:41.323 }, 00:11:41.323 "claimed": true, 00:11:41.323 "claim_type": "exclusive_write", 00:11:41.323 "zoned": false, 00:11:41.323 "supported_io_types": { 00:11:41.323 "read": true, 00:11:41.323 "write": true, 00:11:41.323 "unmap": true, 00:11:41.323 "flush": true, 00:11:41.323 "reset": true, 00:11:41.323 "nvme_admin": false, 00:11:41.323 "nvme_io": false, 00:11:41.323 "nvme_io_md": false, 00:11:41.323 "write_zeroes": true, 00:11:41.323 "zcopy": true, 00:11:41.323 "get_zone_info": false, 00:11:41.323 "zone_management": false, 00:11:41.323 "zone_append": false, 00:11:41.323 "compare": false, 00:11:41.323 "compare_and_write": false, 00:11:41.323 "abort": true, 00:11:41.323 "seek_hole": false, 00:11:41.323 "seek_data": false, 00:11:41.323 "copy": true, 00:11:41.323 "nvme_iov_md": false 00:11:41.323 }, 00:11:41.323 "memory_domains": [ 00:11:41.323 { 00:11:41.323 
"dma_device_id": "system", 00:11:41.323 "dma_device_type": 1 00:11:41.323 }, 00:11:41.323 { 00:11:41.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.323 "dma_device_type": 2 00:11:41.323 } 00:11:41.323 ], 00:11:41.323 "driver_specific": {} 00:11:41.323 } 00:11:41.323 ] 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.323 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.323 "name": "Existed_Raid", 00:11:41.323 "uuid": "2fe359a7-1a76-4a1b-970e-e49731409244", 00:11:41.323 "strip_size_kb": 0, 00:11:41.323 "state": "online", 00:11:41.323 "raid_level": "raid1", 00:11:41.323 "superblock": false, 00:11:41.323 "num_base_bdevs": 3, 00:11:41.323 "num_base_bdevs_discovered": 3, 00:11:41.323 "num_base_bdevs_operational": 3, 00:11:41.323 "base_bdevs_list": [ 00:11:41.323 { 00:11:41.323 "name": "NewBaseBdev", 00:11:41.323 "uuid": "47ae86c5-765b-45b1-a083-86068d37dd0d", 00:11:41.323 "is_configured": true, 00:11:41.323 "data_offset": 0, 00:11:41.323 "data_size": 65536 00:11:41.323 }, 00:11:41.323 { 00:11:41.323 "name": "BaseBdev2", 00:11:41.323 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:41.323 "is_configured": true, 00:11:41.323 "data_offset": 0, 00:11:41.323 "data_size": 65536 00:11:41.323 }, 00:11:41.323 { 00:11:41.323 "name": "BaseBdev3", 00:11:41.323 "uuid": "8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:41.323 "is_configured": true, 00:11:41.323 "data_offset": 0, 00:11:41.323 "data_size": 65536 00:11:41.324 } 00:11:41.324 ] 00:11:41.324 }' 00:11:41.324 04:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.324 04:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.584 04:28:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.584 [2024-11-27 04:28:38.138781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.584 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.844 "name": "Existed_Raid", 00:11:41.844 "aliases": [ 00:11:41.844 "2fe359a7-1a76-4a1b-970e-e49731409244" 00:11:41.844 ], 00:11:41.844 "product_name": "Raid Volume", 00:11:41.844 "block_size": 512, 00:11:41.844 "num_blocks": 65536, 00:11:41.844 "uuid": "2fe359a7-1a76-4a1b-970e-e49731409244", 00:11:41.844 "assigned_rate_limits": { 00:11:41.844 "rw_ios_per_sec": 0, 00:11:41.844 "rw_mbytes_per_sec": 0, 00:11:41.844 "r_mbytes_per_sec": 0, 00:11:41.844 "w_mbytes_per_sec": 0 00:11:41.844 }, 00:11:41.844 "claimed": false, 00:11:41.844 "zoned": false, 00:11:41.844 "supported_io_types": { 00:11:41.844 "read": true, 00:11:41.844 "write": true, 00:11:41.844 "unmap": false, 00:11:41.844 "flush": false, 00:11:41.844 "reset": true, 00:11:41.844 "nvme_admin": false, 00:11:41.844 "nvme_io": false, 00:11:41.844 "nvme_io_md": false, 00:11:41.844 "write_zeroes": true, 00:11:41.844 "zcopy": false, 00:11:41.844 
"get_zone_info": false, 00:11:41.844 "zone_management": false, 00:11:41.844 "zone_append": false, 00:11:41.844 "compare": false, 00:11:41.844 "compare_and_write": false, 00:11:41.844 "abort": false, 00:11:41.844 "seek_hole": false, 00:11:41.844 "seek_data": false, 00:11:41.844 "copy": false, 00:11:41.844 "nvme_iov_md": false 00:11:41.844 }, 00:11:41.844 "memory_domains": [ 00:11:41.844 { 00:11:41.844 "dma_device_id": "system", 00:11:41.844 "dma_device_type": 1 00:11:41.844 }, 00:11:41.844 { 00:11:41.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.844 "dma_device_type": 2 00:11:41.844 }, 00:11:41.844 { 00:11:41.844 "dma_device_id": "system", 00:11:41.844 "dma_device_type": 1 00:11:41.844 }, 00:11:41.844 { 00:11:41.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.844 "dma_device_type": 2 00:11:41.844 }, 00:11:41.844 { 00:11:41.844 "dma_device_id": "system", 00:11:41.844 "dma_device_type": 1 00:11:41.844 }, 00:11:41.844 { 00:11:41.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.844 "dma_device_type": 2 00:11:41.844 } 00:11:41.844 ], 00:11:41.844 "driver_specific": { 00:11:41.844 "raid": { 00:11:41.844 "uuid": "2fe359a7-1a76-4a1b-970e-e49731409244", 00:11:41.844 "strip_size_kb": 0, 00:11:41.844 "state": "online", 00:11:41.844 "raid_level": "raid1", 00:11:41.844 "superblock": false, 00:11:41.844 "num_base_bdevs": 3, 00:11:41.844 "num_base_bdevs_discovered": 3, 00:11:41.844 "num_base_bdevs_operational": 3, 00:11:41.844 "base_bdevs_list": [ 00:11:41.844 { 00:11:41.844 "name": "NewBaseBdev", 00:11:41.844 "uuid": "47ae86c5-765b-45b1-a083-86068d37dd0d", 00:11:41.844 "is_configured": true, 00:11:41.844 "data_offset": 0, 00:11:41.844 "data_size": 65536 00:11:41.844 }, 00:11:41.844 { 00:11:41.844 "name": "BaseBdev2", 00:11:41.844 "uuid": "8f7877fb-408c-49ef-8aff-807b22246322", 00:11:41.844 "is_configured": true, 00:11:41.844 "data_offset": 0, 00:11:41.844 "data_size": 65536 00:11:41.844 }, 00:11:41.844 { 00:11:41.844 "name": "BaseBdev3", 00:11:41.844 "uuid": 
"8a0b6c51-4c54-477d-a3ae-1967a568b4fc", 00:11:41.844 "is_configured": true, 00:11:41.844 "data_offset": 0, 00:11:41.844 "data_size": 65536 00:11:41.844 } 00:11:41.844 ] 00:11:41.844 } 00:11:41.844 } 00:11:41.844 }' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:41.844 BaseBdev2 00:11:41.844 BaseBdev3' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.844 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:41.844 [2024-11-27 04:28:38.414003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.844 [2024-11-27 04:28:38.414166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.845 [2024-11-27 04:28:38.414315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.845 [2024-11-27 04:28:38.414712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.845 [2024-11-27 04:28:38.414771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:41.845 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.845 04:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67638 00:11:41.845 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67638 ']' 00:11:41.845 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67638 00:11:41.845 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:41.845 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.104 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67638 00:11:42.104 killing process with pid 67638 00:11:42.104 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.104 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.104 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67638' 00:11:42.105 04:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67638 00:11:42.105 04:28:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67638 00:11:42.105 [2024-11-27 04:28:38.452706] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.364 [2024-11-27 04:28:38.819417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:43.740 00:11:43.740 real 0m11.458s 00:11:43.740 user 0m17.881s 00:11:43.740 sys 0m1.996s 00:11:43.740 ************************************ 00:11:43.740 END TEST raid_state_function_test 00:11:43.740 ************************************ 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.740 04:28:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:43.740 04:28:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:43.740 04:28:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.740 04:28:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.740 ************************************ 00:11:43.740 START TEST raid_state_function_test_sb 00:11:43.740 ************************************ 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:43.740 04:28:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.740 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:43.741 
04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68270 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68270' 00:11:43.741 Process raid pid: 68270 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68270 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68270 ']' 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.741 04:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.000 [2024-11-27 04:28:40.384673] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:44.000 [2024-11-27 04:28:40.384886] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.000 [2024-11-27 04:28:40.551780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.260 [2024-11-27 04:28:40.711563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.520 [2024-11-27 04:28:40.997061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.520 [2024-11-27 04:28:40.997156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.779 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.779 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:44.779 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:44.779 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.779 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 [2024-11-27 04:28:41.362918] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.044 [2024-11-27 04:28:41.363110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.044 [2024-11-27 04:28:41.363139] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.044 [2024-11-27 04:28:41.363152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.044 [2024-11-27 04:28:41.363160] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:45.044 [2024-11-27 04:28:41.363171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.044 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.044 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:45.044 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.044 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.044 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.044 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.044 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.045 "name": "Existed_Raid", 00:11:45.045 "uuid": "6cb545c8-0acb-478c-bfa9-05f59c86a336", 00:11:45.045 "strip_size_kb": 0, 00:11:45.045 "state": "configuring", 00:11:45.045 "raid_level": "raid1", 00:11:45.045 "superblock": true, 00:11:45.045 "num_base_bdevs": 3, 00:11:45.045 "num_base_bdevs_discovered": 0, 00:11:45.045 "num_base_bdevs_operational": 3, 00:11:45.045 "base_bdevs_list": [ 00:11:45.045 { 00:11:45.045 "name": "BaseBdev1", 00:11:45.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.045 "is_configured": false, 00:11:45.045 "data_offset": 0, 00:11:45.045 "data_size": 0 00:11:45.045 }, 00:11:45.045 { 00:11:45.045 "name": "BaseBdev2", 00:11:45.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.045 "is_configured": false, 00:11:45.045 "data_offset": 0, 00:11:45.045 "data_size": 0 00:11:45.045 }, 00:11:45.045 { 00:11:45.045 "name": "BaseBdev3", 00:11:45.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.045 "is_configured": false, 00:11:45.045 "data_offset": 0, 00:11:45.045 "data_size": 0 00:11:45.045 } 00:11:45.045 ] 00:11:45.045 }' 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.045 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.316 [2024-11-27 04:28:41.850026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.316 [2024-11-27 04:28:41.850209] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.316 [2024-11-27 04:28:41.858004] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.316 [2024-11-27 04:28:41.858169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.316 [2024-11-27 04:28:41.858208] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.316 [2024-11-27 04:28:41.858236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.316 [2024-11-27 04:28:41.858273] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.316 [2024-11-27 04:28:41.858301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.316 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.575 [2024-11-27 04:28:41.914178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.575 BaseBdev1 
00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.575 [ 00:11:45.575 { 00:11:45.575 "name": "BaseBdev1", 00:11:45.575 "aliases": [ 00:11:45.575 "d8a00521-0453-4f92-a14c-a11a8be0f090" 00:11:45.575 ], 00:11:45.575 "product_name": "Malloc disk", 00:11:45.575 "block_size": 512, 00:11:45.575 "num_blocks": 65536, 00:11:45.575 "uuid": "d8a00521-0453-4f92-a14c-a11a8be0f090", 00:11:45.575 "assigned_rate_limits": { 00:11:45.575 
"rw_ios_per_sec": 0, 00:11:45.575 "rw_mbytes_per_sec": 0, 00:11:45.575 "r_mbytes_per_sec": 0, 00:11:45.575 "w_mbytes_per_sec": 0 00:11:45.575 }, 00:11:45.575 "claimed": true, 00:11:45.575 "claim_type": "exclusive_write", 00:11:45.575 "zoned": false, 00:11:45.575 "supported_io_types": { 00:11:45.575 "read": true, 00:11:45.575 "write": true, 00:11:45.575 "unmap": true, 00:11:45.575 "flush": true, 00:11:45.575 "reset": true, 00:11:45.575 "nvme_admin": false, 00:11:45.575 "nvme_io": false, 00:11:45.575 "nvme_io_md": false, 00:11:45.575 "write_zeroes": true, 00:11:45.575 "zcopy": true, 00:11:45.575 "get_zone_info": false, 00:11:45.575 "zone_management": false, 00:11:45.575 "zone_append": false, 00:11:45.575 "compare": false, 00:11:45.575 "compare_and_write": false, 00:11:45.575 "abort": true, 00:11:45.575 "seek_hole": false, 00:11:45.575 "seek_data": false, 00:11:45.575 "copy": true, 00:11:45.575 "nvme_iov_md": false 00:11:45.575 }, 00:11:45.575 "memory_domains": [ 00:11:45.575 { 00:11:45.575 "dma_device_id": "system", 00:11:45.575 "dma_device_type": 1 00:11:45.575 }, 00:11:45.575 { 00:11:45.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.575 "dma_device_type": 2 00:11:45.575 } 00:11:45.575 ], 00:11:45.575 "driver_specific": {} 00:11:45.575 } 00:11:45.575 ] 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.575 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.576 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.576 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.576 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.576 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.576 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.576 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.576 "name": "Existed_Raid", 00:11:45.576 "uuid": "3c304ba8-67c6-482c-9a42-4dfdb2369b3b", 00:11:45.576 "strip_size_kb": 0, 00:11:45.576 "state": "configuring", 00:11:45.576 "raid_level": "raid1", 00:11:45.576 "superblock": true, 00:11:45.576 "num_base_bdevs": 3, 00:11:45.576 "num_base_bdevs_discovered": 1, 00:11:45.576 "num_base_bdevs_operational": 3, 00:11:45.576 "base_bdevs_list": [ 00:11:45.576 { 00:11:45.576 "name": "BaseBdev1", 00:11:45.576 "uuid": "d8a00521-0453-4f92-a14c-a11a8be0f090", 00:11:45.576 "is_configured": true, 00:11:45.576 "data_offset": 2048, 00:11:45.576 "data_size": 63488 
00:11:45.576 }, 00:11:45.576 { 00:11:45.576 "name": "BaseBdev2", 00:11:45.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.576 "is_configured": false, 00:11:45.576 "data_offset": 0, 00:11:45.576 "data_size": 0 00:11:45.576 }, 00:11:45.576 { 00:11:45.576 "name": "BaseBdev3", 00:11:45.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.576 "is_configured": false, 00:11:45.576 "data_offset": 0, 00:11:45.576 "data_size": 0 00:11:45.576 } 00:11:45.576 ] 00:11:45.576 }' 00:11:45.576 04:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.576 04:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.836 [2024-11-27 04:28:42.389452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.836 [2024-11-27 04:28:42.389544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.836 [2024-11-27 04:28:42.397529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.836 [2024-11-27 04:28:42.399896] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.836 [2024-11-27 04:28:42.399953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.836 [2024-11-27 04:28:42.399965] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.836 [2024-11-27 04:28:42.399976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.836 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.096 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.096 "name": "Existed_Raid", 00:11:46.096 "uuid": "5c9133cf-1365-458e-9fec-cf67d233f1f1", 00:11:46.096 "strip_size_kb": 0, 00:11:46.096 "state": "configuring", 00:11:46.096 "raid_level": "raid1", 00:11:46.096 "superblock": true, 00:11:46.096 "num_base_bdevs": 3, 00:11:46.096 "num_base_bdevs_discovered": 1, 00:11:46.096 "num_base_bdevs_operational": 3, 00:11:46.096 "base_bdevs_list": [ 00:11:46.096 { 00:11:46.096 "name": "BaseBdev1", 00:11:46.096 "uuid": "d8a00521-0453-4f92-a14c-a11a8be0f090", 00:11:46.096 "is_configured": true, 00:11:46.096 "data_offset": 2048, 00:11:46.096 "data_size": 63488 00:11:46.096 }, 00:11:46.096 { 00:11:46.096 "name": "BaseBdev2", 00:11:46.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.096 "is_configured": false, 00:11:46.096 "data_offset": 0, 00:11:46.096 "data_size": 0 00:11:46.096 }, 00:11:46.096 { 00:11:46.096 "name": "BaseBdev3", 00:11:46.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.096 "is_configured": false, 00:11:46.096 "data_offset": 0, 00:11:46.096 "data_size": 0 00:11:46.096 } 00:11:46.096 ] 00:11:46.096 }' 00:11:46.096 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.096 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:46.356 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.356 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.356 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.356 [2024-11-27 04:28:42.904871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.356 BaseBdev2 00:11:46.356 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.356 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:46.356 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:46.356 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.356 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:46.356 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.357 [ 00:11:46.357 { 00:11:46.357 "name": "BaseBdev2", 00:11:46.357 "aliases": [ 00:11:46.357 "8f9d49f9-5fbe-4467-8c5c-2fe87f78f599" 00:11:46.357 ], 00:11:46.357 "product_name": "Malloc disk", 00:11:46.357 "block_size": 512, 00:11:46.357 "num_blocks": 65536, 00:11:46.357 "uuid": "8f9d49f9-5fbe-4467-8c5c-2fe87f78f599", 00:11:46.357 "assigned_rate_limits": { 00:11:46.357 "rw_ios_per_sec": 0, 00:11:46.357 "rw_mbytes_per_sec": 0, 00:11:46.357 "r_mbytes_per_sec": 0, 00:11:46.357 "w_mbytes_per_sec": 0 00:11:46.357 }, 00:11:46.357 "claimed": true, 00:11:46.357 "claim_type": "exclusive_write", 00:11:46.357 "zoned": false, 00:11:46.357 "supported_io_types": { 00:11:46.357 "read": true, 00:11:46.357 "write": true, 00:11:46.357 "unmap": true, 00:11:46.357 "flush": true, 00:11:46.357 "reset": true, 00:11:46.357 "nvme_admin": false, 00:11:46.357 "nvme_io": false, 00:11:46.357 "nvme_io_md": false, 00:11:46.357 "write_zeroes": true, 00:11:46.357 "zcopy": true, 00:11:46.357 "get_zone_info": false, 00:11:46.357 "zone_management": false, 00:11:46.357 "zone_append": false, 00:11:46.357 "compare": false, 00:11:46.357 "compare_and_write": false, 00:11:46.357 "abort": true, 00:11:46.357 "seek_hole": false, 00:11:46.357 "seek_data": false, 00:11:46.357 "copy": true, 00:11:46.357 "nvme_iov_md": false 00:11:46.357 }, 00:11:46.357 "memory_domains": [ 00:11:46.357 { 00:11:46.357 "dma_device_id": "system", 00:11:46.357 "dma_device_type": 1 00:11:46.357 }, 00:11:46.357 { 00:11:46.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.357 "dma_device_type": 2 00:11:46.357 } 00:11:46.357 ], 00:11:46.357 "driver_specific": {} 00:11:46.357 } 00:11:46.357 ] 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.357 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.616 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.616 
04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.616 "name": "Existed_Raid", 00:11:46.616 "uuid": "5c9133cf-1365-458e-9fec-cf67d233f1f1", 00:11:46.616 "strip_size_kb": 0, 00:11:46.616 "state": "configuring", 00:11:46.616 "raid_level": "raid1", 00:11:46.616 "superblock": true, 00:11:46.616 "num_base_bdevs": 3, 00:11:46.616 "num_base_bdevs_discovered": 2, 00:11:46.616 "num_base_bdevs_operational": 3, 00:11:46.616 "base_bdevs_list": [ 00:11:46.616 { 00:11:46.616 "name": "BaseBdev1", 00:11:46.616 "uuid": "d8a00521-0453-4f92-a14c-a11a8be0f090", 00:11:46.616 "is_configured": true, 00:11:46.616 "data_offset": 2048, 00:11:46.616 "data_size": 63488 00:11:46.616 }, 00:11:46.616 { 00:11:46.616 "name": "BaseBdev2", 00:11:46.616 "uuid": "8f9d49f9-5fbe-4467-8c5c-2fe87f78f599", 00:11:46.616 "is_configured": true, 00:11:46.616 "data_offset": 2048, 00:11:46.616 "data_size": 63488 00:11:46.616 }, 00:11:46.616 { 00:11:46.616 "name": "BaseBdev3", 00:11:46.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.616 "is_configured": false, 00:11:46.616 "data_offset": 0, 00:11:46.616 "data_size": 0 00:11:46.616 } 00:11:46.616 ] 00:11:46.616 }' 00:11:46.616 04:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.616 04:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.876 [2024-11-27 04:28:43.441266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.876 [2024-11-27 04:28:43.441719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:46.876 [2024-11-27 04:28:43.441786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.876 BaseBdev3 00:11:46.876 [2024-11-27 04:28:43.442182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:46.876 [2024-11-27 04:28:43.442369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:46.876 [2024-11-27 04:28:43.442423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:46.876 [2024-11-27 04:28:43.442651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.876 04:28:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.876 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.136 [ 00:11:47.136 { 00:11:47.136 "name": "BaseBdev3", 00:11:47.136 "aliases": [ 00:11:47.136 "e34fd21d-2453-4f63-82ca-8cdd086cd50c" 00:11:47.136 ], 00:11:47.136 "product_name": "Malloc disk", 00:11:47.136 "block_size": 512, 00:11:47.136 "num_blocks": 65536, 00:11:47.136 "uuid": "e34fd21d-2453-4f63-82ca-8cdd086cd50c", 00:11:47.136 "assigned_rate_limits": { 00:11:47.136 "rw_ios_per_sec": 0, 00:11:47.136 "rw_mbytes_per_sec": 0, 00:11:47.136 "r_mbytes_per_sec": 0, 00:11:47.136 "w_mbytes_per_sec": 0 00:11:47.136 }, 00:11:47.136 "claimed": true, 00:11:47.136 "claim_type": "exclusive_write", 00:11:47.136 "zoned": false, 00:11:47.136 "supported_io_types": { 00:11:47.136 "read": true, 00:11:47.136 "write": true, 00:11:47.136 "unmap": true, 00:11:47.136 "flush": true, 00:11:47.136 "reset": true, 00:11:47.136 "nvme_admin": false, 00:11:47.136 "nvme_io": false, 00:11:47.136 "nvme_io_md": false, 00:11:47.136 "write_zeroes": true, 00:11:47.136 "zcopy": true, 00:11:47.136 "get_zone_info": false, 00:11:47.136 "zone_management": false, 00:11:47.136 "zone_append": false, 00:11:47.136 "compare": false, 00:11:47.136 "compare_and_write": false, 00:11:47.136 "abort": true, 00:11:47.136 "seek_hole": false, 00:11:47.136 "seek_data": false, 00:11:47.136 "copy": true, 00:11:47.136 "nvme_iov_md": false 00:11:47.136 }, 00:11:47.136 "memory_domains": [ 00:11:47.136 { 00:11:47.136 "dma_device_id": "system", 00:11:47.136 "dma_device_type": 1 00:11:47.136 }, 00:11:47.136 { 00:11:47.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.136 "dma_device_type": 2 00:11:47.136 } 00:11:47.136 ], 00:11:47.136 "driver_specific": {} 00:11:47.136 } 00:11:47.136 ] 
00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.136 04:28:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.136 "name": "Existed_Raid", 00:11:47.136 "uuid": "5c9133cf-1365-458e-9fec-cf67d233f1f1", 00:11:47.136 "strip_size_kb": 0, 00:11:47.136 "state": "online", 00:11:47.136 "raid_level": "raid1", 00:11:47.136 "superblock": true, 00:11:47.136 "num_base_bdevs": 3, 00:11:47.136 "num_base_bdevs_discovered": 3, 00:11:47.136 "num_base_bdevs_operational": 3, 00:11:47.136 "base_bdevs_list": [ 00:11:47.136 { 00:11:47.136 "name": "BaseBdev1", 00:11:47.136 "uuid": "d8a00521-0453-4f92-a14c-a11a8be0f090", 00:11:47.136 "is_configured": true, 00:11:47.136 "data_offset": 2048, 00:11:47.136 "data_size": 63488 00:11:47.136 }, 00:11:47.136 { 00:11:47.136 "name": "BaseBdev2", 00:11:47.136 "uuid": "8f9d49f9-5fbe-4467-8c5c-2fe87f78f599", 00:11:47.136 "is_configured": true, 00:11:47.136 "data_offset": 2048, 00:11:47.136 "data_size": 63488 00:11:47.136 }, 00:11:47.136 { 00:11:47.136 "name": "BaseBdev3", 00:11:47.136 "uuid": "e34fd21d-2453-4f63-82ca-8cdd086cd50c", 00:11:47.136 "is_configured": true, 00:11:47.136 "data_offset": 2048, 00:11:47.136 "data_size": 63488 00:11:47.136 } 00:11:47.136 ] 00:11:47.136 }' 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.136 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.396 [2024-11-27 04:28:43.944873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.396 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.396 "name": "Existed_Raid", 00:11:47.396 "aliases": [ 00:11:47.396 "5c9133cf-1365-458e-9fec-cf67d233f1f1" 00:11:47.396 ], 00:11:47.396 "product_name": "Raid Volume", 00:11:47.396 "block_size": 512, 00:11:47.396 "num_blocks": 63488, 00:11:47.396 "uuid": "5c9133cf-1365-458e-9fec-cf67d233f1f1", 00:11:47.396 "assigned_rate_limits": { 00:11:47.396 "rw_ios_per_sec": 0, 00:11:47.396 "rw_mbytes_per_sec": 0, 00:11:47.396 "r_mbytes_per_sec": 0, 00:11:47.396 "w_mbytes_per_sec": 0 00:11:47.396 }, 00:11:47.396 "claimed": false, 00:11:47.396 "zoned": false, 00:11:47.397 "supported_io_types": { 00:11:47.397 "read": true, 00:11:47.397 "write": true, 00:11:47.397 "unmap": false, 00:11:47.397 "flush": false, 00:11:47.397 "reset": true, 00:11:47.397 "nvme_admin": false, 00:11:47.397 "nvme_io": false, 00:11:47.397 "nvme_io_md": false, 00:11:47.397 
"write_zeroes": true, 00:11:47.397 "zcopy": false, 00:11:47.397 "get_zone_info": false, 00:11:47.397 "zone_management": false, 00:11:47.397 "zone_append": false, 00:11:47.397 "compare": false, 00:11:47.397 "compare_and_write": false, 00:11:47.397 "abort": false, 00:11:47.397 "seek_hole": false, 00:11:47.397 "seek_data": false, 00:11:47.397 "copy": false, 00:11:47.397 "nvme_iov_md": false 00:11:47.397 }, 00:11:47.397 "memory_domains": [ 00:11:47.397 { 00:11:47.397 "dma_device_id": "system", 00:11:47.397 "dma_device_type": 1 00:11:47.397 }, 00:11:47.397 { 00:11:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.397 "dma_device_type": 2 00:11:47.397 }, 00:11:47.397 { 00:11:47.397 "dma_device_id": "system", 00:11:47.397 "dma_device_type": 1 00:11:47.397 }, 00:11:47.397 { 00:11:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.397 "dma_device_type": 2 00:11:47.397 }, 00:11:47.397 { 00:11:47.397 "dma_device_id": "system", 00:11:47.397 "dma_device_type": 1 00:11:47.397 }, 00:11:47.397 { 00:11:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.397 "dma_device_type": 2 00:11:47.397 } 00:11:47.397 ], 00:11:47.397 "driver_specific": { 00:11:47.397 "raid": { 00:11:47.397 "uuid": "5c9133cf-1365-458e-9fec-cf67d233f1f1", 00:11:47.397 "strip_size_kb": 0, 00:11:47.397 "state": "online", 00:11:47.397 "raid_level": "raid1", 00:11:47.397 "superblock": true, 00:11:47.397 "num_base_bdevs": 3, 00:11:47.397 "num_base_bdevs_discovered": 3, 00:11:47.397 "num_base_bdevs_operational": 3, 00:11:47.397 "base_bdevs_list": [ 00:11:47.397 { 00:11:47.397 "name": "BaseBdev1", 00:11:47.397 "uuid": "d8a00521-0453-4f92-a14c-a11a8be0f090", 00:11:47.397 "is_configured": true, 00:11:47.397 "data_offset": 2048, 00:11:47.397 "data_size": 63488 00:11:47.397 }, 00:11:47.397 { 00:11:47.397 "name": "BaseBdev2", 00:11:47.397 "uuid": "8f9d49f9-5fbe-4467-8c5c-2fe87f78f599", 00:11:47.397 "is_configured": true, 00:11:47.397 "data_offset": 2048, 00:11:47.397 "data_size": 63488 00:11:47.397 }, 
00:11:47.397 { 00:11:47.397 "name": "BaseBdev3", 00:11:47.397 "uuid": "e34fd21d-2453-4f63-82ca-8cdd086cd50c", 00:11:47.397 "is_configured": true, 00:11:47.397 "data_offset": 2048, 00:11:47.397 "data_size": 63488 00:11:47.397 } 00:11:47.397 ] 00:11:47.397 } 00:11:47.397 } 00:11:47.397 }' 00:11:47.397 04:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:47.657 BaseBdev2 00:11:47.657 BaseBdev3' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.657 
04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.657 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.657 [2024-11-27 04:28:44.204217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.917 
04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.917 "name": "Existed_Raid", 00:11:47.917 "uuid": "5c9133cf-1365-458e-9fec-cf67d233f1f1", 00:11:47.917 "strip_size_kb": 0, 00:11:47.917 "state": "online", 00:11:47.917 "raid_level": "raid1", 00:11:47.917 "superblock": true, 00:11:47.917 "num_base_bdevs": 3, 00:11:47.917 "num_base_bdevs_discovered": 2, 00:11:47.917 "num_base_bdevs_operational": 2, 00:11:47.917 "base_bdevs_list": [ 00:11:47.917 { 00:11:47.917 "name": null, 00:11:47.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.917 "is_configured": false, 00:11:47.917 "data_offset": 0, 00:11:47.917 "data_size": 63488 00:11:47.917 }, 00:11:47.917 { 00:11:47.917 "name": "BaseBdev2", 00:11:47.917 "uuid": "8f9d49f9-5fbe-4467-8c5c-2fe87f78f599", 00:11:47.917 "is_configured": true, 00:11:47.917 "data_offset": 2048, 00:11:47.917 "data_size": 63488 00:11:47.917 }, 00:11:47.917 { 00:11:47.917 "name": "BaseBdev3", 00:11:47.917 "uuid": "e34fd21d-2453-4f63-82ca-8cdd086cd50c", 00:11:47.917 "is_configured": true, 00:11:47.917 "data_offset": 2048, 00:11:47.917 "data_size": 63488 00:11:47.917 } 00:11:47.917 ] 00:11:47.917 }' 00:11:47.917 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.917 
04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.485 [2024-11-27 04:28:44.850947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.485 04:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.485 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.485 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.485 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:48.485 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.485 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.485 [2024-11-27 04:28:45.020632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.485 [2024-11-27 04:28:45.020800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.744 [2024-11-27 04:28:45.141866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.744 [2024-11-27 04:28:45.142062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.744 [2024-11-27 04:28:45.142188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.744 BaseBdev2 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.744 04:28:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.744 [ 00:11:48.744 { 00:11:48.744 "name": "BaseBdev2", 00:11:48.744 "aliases": [ 00:11:48.744 "c0381075-03d3-4dbc-9bd5-7dbddf6096bb" 00:11:48.744 ], 00:11:48.744 "product_name": "Malloc disk", 00:11:48.744 "block_size": 512, 00:11:48.744 "num_blocks": 65536, 00:11:48.744 "uuid": "c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:48.744 "assigned_rate_limits": { 00:11:48.744 "rw_ios_per_sec": 0, 00:11:48.744 "rw_mbytes_per_sec": 0, 00:11:48.744 "r_mbytes_per_sec": 0, 00:11:48.744 "w_mbytes_per_sec": 0 00:11:48.744 }, 00:11:48.744 "claimed": false, 00:11:48.744 "zoned": false, 00:11:48.744 "supported_io_types": { 00:11:48.744 "read": true, 00:11:48.744 "write": true, 00:11:48.744 "unmap": true, 00:11:48.744 "flush": true, 00:11:48.744 "reset": true, 00:11:48.744 "nvme_admin": false, 00:11:48.744 "nvme_io": false, 00:11:48.744 "nvme_io_md": false, 00:11:48.744 
"write_zeroes": true, 00:11:48.744 "zcopy": true, 00:11:48.744 "get_zone_info": false, 00:11:48.744 "zone_management": false, 00:11:48.744 "zone_append": false, 00:11:48.744 "compare": false, 00:11:48.744 "compare_and_write": false, 00:11:48.744 "abort": true, 00:11:48.744 "seek_hole": false, 00:11:48.744 "seek_data": false, 00:11:48.744 "copy": true, 00:11:48.744 "nvme_iov_md": false 00:11:48.744 }, 00:11:48.744 "memory_domains": [ 00:11:48.744 { 00:11:48.744 "dma_device_id": "system", 00:11:48.744 "dma_device_type": 1 00:11:48.744 }, 00:11:48.744 { 00:11:48.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.744 "dma_device_type": 2 00:11:48.744 } 00:11:48.744 ], 00:11:48.744 "driver_specific": {} 00:11:48.744 } 00:11:48.744 ] 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.744 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.030 BaseBdev3 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.030 [ 00:11:49.030 { 00:11:49.030 "name": "BaseBdev3", 00:11:49.030 "aliases": [ 00:11:49.030 "a2b9aba0-8e1e-4854-9279-d2a95de9a305" 00:11:49.030 ], 00:11:49.030 "product_name": "Malloc disk", 00:11:49.030 "block_size": 512, 00:11:49.030 "num_blocks": 65536, 00:11:49.030 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:49.030 "assigned_rate_limits": { 00:11:49.030 "rw_ios_per_sec": 0, 00:11:49.030 "rw_mbytes_per_sec": 0, 00:11:49.030 "r_mbytes_per_sec": 0, 00:11:49.030 "w_mbytes_per_sec": 0 00:11:49.030 }, 00:11:49.030 "claimed": false, 00:11:49.030 "zoned": false, 00:11:49.030 "supported_io_types": { 00:11:49.030 "read": true, 00:11:49.030 "write": true, 00:11:49.030 "unmap": true, 00:11:49.030 "flush": true, 00:11:49.030 "reset": true, 00:11:49.030 "nvme_admin": false, 00:11:49.030 "nvme_io": false, 
00:11:49.030 "nvme_io_md": false, 00:11:49.030 "write_zeroes": true, 00:11:49.030 "zcopy": true, 00:11:49.030 "get_zone_info": false, 00:11:49.030 "zone_management": false, 00:11:49.030 "zone_append": false, 00:11:49.030 "compare": false, 00:11:49.030 "compare_and_write": false, 00:11:49.030 "abort": true, 00:11:49.030 "seek_hole": false, 00:11:49.030 "seek_data": false, 00:11:49.030 "copy": true, 00:11:49.030 "nvme_iov_md": false 00:11:49.030 }, 00:11:49.030 "memory_domains": [ 00:11:49.030 { 00:11:49.030 "dma_device_id": "system", 00:11:49.030 "dma_device_type": 1 00:11:49.030 }, 00:11:49.030 { 00:11:49.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.030 "dma_device_type": 2 00:11:49.030 } 00:11:49.030 ], 00:11:49.030 "driver_specific": {} 00:11:49.030 } 00:11:49.030 ] 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.030 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.030 [2024-11-27 04:28:45.379246] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.030 [2024-11-27 04:28:45.379459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.030 [2024-11-27 04:28:45.379524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:11:49.030 [2024-11-27 04:28:45.382041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.031 "name": "Existed_Raid", 00:11:49.031 "uuid": "92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:49.031 "strip_size_kb": 0, 00:11:49.031 "state": "configuring", 00:11:49.031 "raid_level": "raid1", 00:11:49.031 "superblock": true, 00:11:49.031 "num_base_bdevs": 3, 00:11:49.031 "num_base_bdevs_discovered": 2, 00:11:49.031 "num_base_bdevs_operational": 3, 00:11:49.031 "base_bdevs_list": [ 00:11:49.031 { 00:11:49.031 "name": "BaseBdev1", 00:11:49.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.031 "is_configured": false, 00:11:49.031 "data_offset": 0, 00:11:49.031 "data_size": 0 00:11:49.031 }, 00:11:49.031 { 00:11:49.031 "name": "BaseBdev2", 00:11:49.031 "uuid": "c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:49.031 "is_configured": true, 00:11:49.031 "data_offset": 2048, 00:11:49.031 "data_size": 63488 00:11:49.031 }, 00:11:49.031 { 00:11:49.031 "name": "BaseBdev3", 00:11:49.031 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:49.031 "is_configured": true, 00:11:49.031 "data_offset": 2048, 00:11:49.031 "data_size": 63488 00:11:49.031 } 00:11:49.031 ] 00:11:49.031 }' 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.031 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.297 [2024-11-27 04:28:45.870413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.297 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.556 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.556 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.556 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.556 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.556 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.556 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.556 "name": "Existed_Raid", 00:11:49.556 "uuid": 
"92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:49.556 "strip_size_kb": 0, 00:11:49.556 "state": "configuring", 00:11:49.556 "raid_level": "raid1", 00:11:49.556 "superblock": true, 00:11:49.556 "num_base_bdevs": 3, 00:11:49.556 "num_base_bdevs_discovered": 1, 00:11:49.556 "num_base_bdevs_operational": 3, 00:11:49.556 "base_bdevs_list": [ 00:11:49.556 { 00:11:49.556 "name": "BaseBdev1", 00:11:49.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.556 "is_configured": false, 00:11:49.556 "data_offset": 0, 00:11:49.556 "data_size": 0 00:11:49.556 }, 00:11:49.556 { 00:11:49.556 "name": null, 00:11:49.556 "uuid": "c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:49.556 "is_configured": false, 00:11:49.556 "data_offset": 0, 00:11:49.556 "data_size": 63488 00:11:49.556 }, 00:11:49.556 { 00:11:49.556 "name": "BaseBdev3", 00:11:49.556 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:49.556 "is_configured": true, 00:11:49.556 "data_offset": 2048, 00:11:49.556 "data_size": 63488 00:11:49.556 } 00:11:49.556 ] 00:11:49.556 }' 00:11:49.556 04:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.556 04:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.815 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.815 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.815 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:49.815 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.815 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.816 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:49.816 04:28:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:49.816 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.816 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.076 [2024-11-27 04:28:46.434955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.076 BaseBdev1 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.076 [ 00:11:50.076 { 00:11:50.076 "name": "BaseBdev1", 00:11:50.076 "aliases": [ 00:11:50.076 "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca" 00:11:50.076 ], 00:11:50.076 "product_name": "Malloc disk", 00:11:50.076 "block_size": 512, 00:11:50.076 "num_blocks": 65536, 00:11:50.076 "uuid": "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca", 00:11:50.076 "assigned_rate_limits": { 00:11:50.076 "rw_ios_per_sec": 0, 00:11:50.076 "rw_mbytes_per_sec": 0, 00:11:50.076 "r_mbytes_per_sec": 0, 00:11:50.076 "w_mbytes_per_sec": 0 00:11:50.076 }, 00:11:50.076 "claimed": true, 00:11:50.076 "claim_type": "exclusive_write", 00:11:50.076 "zoned": false, 00:11:50.076 "supported_io_types": { 00:11:50.076 "read": true, 00:11:50.076 "write": true, 00:11:50.076 "unmap": true, 00:11:50.076 "flush": true, 00:11:50.076 "reset": true, 00:11:50.076 "nvme_admin": false, 00:11:50.076 "nvme_io": false, 00:11:50.076 "nvme_io_md": false, 00:11:50.076 "write_zeroes": true, 00:11:50.076 "zcopy": true, 00:11:50.076 "get_zone_info": false, 00:11:50.076 "zone_management": false, 00:11:50.076 "zone_append": false, 00:11:50.076 "compare": false, 00:11:50.076 "compare_and_write": false, 00:11:50.076 "abort": true, 00:11:50.076 "seek_hole": false, 00:11:50.076 "seek_data": false, 00:11:50.076 "copy": true, 00:11:50.076 "nvme_iov_md": false 00:11:50.076 }, 00:11:50.076 "memory_domains": [ 00:11:50.076 { 00:11:50.076 "dma_device_id": "system", 00:11:50.076 "dma_device_type": 1 00:11:50.076 }, 00:11:50.076 { 00:11:50.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.076 "dma_device_type": 2 00:11:50.076 } 00:11:50.076 ], 00:11:50.076 "driver_specific": {} 00:11:50.076 } 00:11:50.076 ] 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:50.076 
04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.076 "name": "Existed_Raid", 00:11:50.076 "uuid": "92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:50.076 "strip_size_kb": 0, 
00:11:50.076 "state": "configuring", 00:11:50.076 "raid_level": "raid1", 00:11:50.076 "superblock": true, 00:11:50.076 "num_base_bdevs": 3, 00:11:50.076 "num_base_bdevs_discovered": 2, 00:11:50.076 "num_base_bdevs_operational": 3, 00:11:50.076 "base_bdevs_list": [ 00:11:50.076 { 00:11:50.076 "name": "BaseBdev1", 00:11:50.076 "uuid": "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca", 00:11:50.076 "is_configured": true, 00:11:50.076 "data_offset": 2048, 00:11:50.076 "data_size": 63488 00:11:50.076 }, 00:11:50.076 { 00:11:50.076 "name": null, 00:11:50.076 "uuid": "c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:50.076 "is_configured": false, 00:11:50.076 "data_offset": 0, 00:11:50.076 "data_size": 63488 00:11:50.076 }, 00:11:50.076 { 00:11:50.076 "name": "BaseBdev3", 00:11:50.076 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:50.076 "is_configured": true, 00:11:50.076 "data_offset": 2048, 00:11:50.076 "data_size": 63488 00:11:50.076 } 00:11:50.076 ] 00:11:50.076 }' 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.076 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.336 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.336 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.336 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.336 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.596 [2024-11-27 04:28:46.962192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.596 04:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.596 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.596 "name": "Existed_Raid", 00:11:50.596 "uuid": "92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:50.596 "strip_size_kb": 0, 00:11:50.596 "state": "configuring", 00:11:50.596 "raid_level": "raid1", 00:11:50.596 "superblock": true, 00:11:50.596 "num_base_bdevs": 3, 00:11:50.596 "num_base_bdevs_discovered": 1, 00:11:50.596 "num_base_bdevs_operational": 3, 00:11:50.596 "base_bdevs_list": [ 00:11:50.596 { 00:11:50.596 "name": "BaseBdev1", 00:11:50.596 "uuid": "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca", 00:11:50.596 "is_configured": true, 00:11:50.596 "data_offset": 2048, 00:11:50.596 "data_size": 63488 00:11:50.596 }, 00:11:50.596 { 00:11:50.596 "name": null, 00:11:50.596 "uuid": "c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:50.596 "is_configured": false, 00:11:50.596 "data_offset": 0, 00:11:50.596 "data_size": 63488 00:11:50.596 }, 00:11:50.596 { 00:11:50.596 "name": null, 00:11:50.596 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:50.596 "is_configured": false, 00:11:50.596 "data_offset": 0, 00:11:50.596 "data_size": 63488 00:11:50.596 } 00:11:50.596 ] 00:11:50.596 }' 00:11:50.596 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.596 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.856 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.856 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.856 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.856 04:28:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.115 [2024-11-27 04:28:47.489397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.115 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.115 "name": "Existed_Raid", 00:11:51.115 "uuid": "92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:51.115 "strip_size_kb": 0, 00:11:51.115 "state": "configuring", 00:11:51.115 "raid_level": "raid1", 00:11:51.115 "superblock": true, 00:11:51.115 "num_base_bdevs": 3, 00:11:51.115 "num_base_bdevs_discovered": 2, 00:11:51.115 "num_base_bdevs_operational": 3, 00:11:51.115 "base_bdevs_list": [ 00:11:51.115 { 00:11:51.115 "name": "BaseBdev1", 00:11:51.115 "uuid": "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca", 00:11:51.115 "is_configured": true, 00:11:51.115 "data_offset": 2048, 00:11:51.115 "data_size": 63488 00:11:51.115 }, 00:11:51.115 { 00:11:51.115 "name": null, 00:11:51.115 "uuid": "c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:51.115 "is_configured": false, 00:11:51.115 "data_offset": 0, 00:11:51.116 "data_size": 63488 00:11:51.116 }, 00:11:51.116 { 00:11:51.116 "name": "BaseBdev3", 00:11:51.116 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:51.116 "is_configured": true, 00:11:51.116 "data_offset": 2048, 00:11:51.116 "data_size": 63488 00:11:51.116 } 00:11:51.116 ] 00:11:51.116 }' 00:11:51.116 04:28:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.116 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.375 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.375 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.375 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.375 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.375 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.634 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:51.635 04:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:51.635 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.635 04:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.635 [2024-11-27 04:28:47.980583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.635 "name": "Existed_Raid", 00:11:51.635 "uuid": "92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:51.635 "strip_size_kb": 0, 00:11:51.635 "state": "configuring", 00:11:51.635 "raid_level": "raid1", 00:11:51.635 "superblock": true, 00:11:51.635 "num_base_bdevs": 3, 00:11:51.635 "num_base_bdevs_discovered": 1, 00:11:51.635 "num_base_bdevs_operational": 3, 00:11:51.635 "base_bdevs_list": [ 00:11:51.635 { 00:11:51.635 "name": null, 00:11:51.635 "uuid": "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca", 00:11:51.635 "is_configured": false, 00:11:51.635 "data_offset": 0, 00:11:51.635 "data_size": 63488 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "name": null, 00:11:51.635 "uuid": 
"c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:51.635 "is_configured": false, 00:11:51.635 "data_offset": 0, 00:11:51.635 "data_size": 63488 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "name": "BaseBdev3", 00:11:51.635 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:51.635 "is_configured": true, 00:11:51.635 "data_offset": 2048, 00:11:51.635 "data_size": 63488 00:11:51.635 } 00:11:51.635 ] 00:11:51.635 }' 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.635 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.204 [2024-11-27 04:28:48.620813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.204 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.205 "name": "Existed_Raid", 00:11:52.205 "uuid": "92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:52.205 "strip_size_kb": 0, 00:11:52.205 "state": "configuring", 00:11:52.205 
"raid_level": "raid1", 00:11:52.205 "superblock": true, 00:11:52.205 "num_base_bdevs": 3, 00:11:52.205 "num_base_bdevs_discovered": 2, 00:11:52.205 "num_base_bdevs_operational": 3, 00:11:52.205 "base_bdevs_list": [ 00:11:52.205 { 00:11:52.205 "name": null, 00:11:52.205 "uuid": "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca", 00:11:52.205 "is_configured": false, 00:11:52.205 "data_offset": 0, 00:11:52.205 "data_size": 63488 00:11:52.205 }, 00:11:52.205 { 00:11:52.205 "name": "BaseBdev2", 00:11:52.205 "uuid": "c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:52.205 "is_configured": true, 00:11:52.205 "data_offset": 2048, 00:11:52.205 "data_size": 63488 00:11:52.205 }, 00:11:52.205 { 00:11:52.205 "name": "BaseBdev3", 00:11:52.205 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:52.205 "is_configured": true, 00:11:52.205 "data_offset": 2048, 00:11:52.205 "data_size": 63488 00:11:52.205 } 00:11:52.205 ] 00:11:52.205 }' 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.205 04:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.774 04:28:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 [2024-11-27 04:28:49.210707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:52.774 [2024-11-27 04:28:49.211173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:52.774 [2024-11-27 04:28:49.211195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.774 [2024-11-27 04:28:49.211534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:52.774 NewBaseBdev 00:11:52.774 [2024-11-27 04:28:49.211716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:52.774 [2024-11-27 04:28:49.211731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:52.774 [2024-11-27 04:28:49.211907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:52.774 
04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 [ 00:11:52.774 { 00:11:52.774 "name": "NewBaseBdev", 00:11:52.774 "aliases": [ 00:11:52.774 "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca" 00:11:52.774 ], 00:11:52.774 "product_name": "Malloc disk", 00:11:52.774 "block_size": 512, 00:11:52.774 "num_blocks": 65536, 00:11:52.774 "uuid": "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca", 00:11:52.774 "assigned_rate_limits": { 00:11:52.774 "rw_ios_per_sec": 0, 00:11:52.774 "rw_mbytes_per_sec": 0, 00:11:52.774 "r_mbytes_per_sec": 0, 00:11:52.774 "w_mbytes_per_sec": 0 00:11:52.774 }, 00:11:52.774 "claimed": true, 00:11:52.774 "claim_type": "exclusive_write", 00:11:52.774 
"zoned": false, 00:11:52.774 "supported_io_types": { 00:11:52.774 "read": true, 00:11:52.774 "write": true, 00:11:52.774 "unmap": true, 00:11:52.774 "flush": true, 00:11:52.774 "reset": true, 00:11:52.774 "nvme_admin": false, 00:11:52.774 "nvme_io": false, 00:11:52.774 "nvme_io_md": false, 00:11:52.774 "write_zeroes": true, 00:11:52.774 "zcopy": true, 00:11:52.774 "get_zone_info": false, 00:11:52.774 "zone_management": false, 00:11:52.774 "zone_append": false, 00:11:52.774 "compare": false, 00:11:52.774 "compare_and_write": false, 00:11:52.774 "abort": true, 00:11:52.774 "seek_hole": false, 00:11:52.774 "seek_data": false, 00:11:52.774 "copy": true, 00:11:52.774 "nvme_iov_md": false 00:11:52.774 }, 00:11:52.774 "memory_domains": [ 00:11:52.774 { 00:11:52.774 "dma_device_id": "system", 00:11:52.774 "dma_device_type": 1 00:11:52.774 }, 00:11:52.774 { 00:11:52.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.774 "dma_device_type": 2 00:11:52.774 } 00:11:52.774 ], 00:11:52.774 "driver_specific": {} 00:11:52.774 } 00:11:52.774 ] 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.774 "name": "Existed_Raid", 00:11:52.774 "uuid": "92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:52.774 "strip_size_kb": 0, 00:11:52.774 "state": "online", 00:11:52.774 "raid_level": "raid1", 00:11:52.774 "superblock": true, 00:11:52.774 "num_base_bdevs": 3, 00:11:52.774 "num_base_bdevs_discovered": 3, 00:11:52.774 "num_base_bdevs_operational": 3, 00:11:52.774 "base_bdevs_list": [ 00:11:52.774 { 00:11:52.774 "name": "NewBaseBdev", 00:11:52.774 "uuid": "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca", 00:11:52.774 "is_configured": true, 00:11:52.774 "data_offset": 2048, 00:11:52.774 "data_size": 63488 00:11:52.774 }, 00:11:52.774 { 00:11:52.774 "name": "BaseBdev2", 00:11:52.774 "uuid": "c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:52.774 "is_configured": true, 00:11:52.774 "data_offset": 2048, 00:11:52.774 "data_size": 63488 00:11:52.774 }, 00:11:52.774 
{ 00:11:52.774 "name": "BaseBdev3", 00:11:52.774 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:52.774 "is_configured": true, 00:11:52.774 "data_offset": 2048, 00:11:52.774 "data_size": 63488 00:11:52.774 } 00:11:52.774 ] 00:11:52.774 }' 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.774 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.384 [2024-11-27 04:28:49.730438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.384 "name": "Existed_Raid", 00:11:53.384 
"aliases": [ 00:11:53.384 "92215cab-4784-4f63-8eff-e7b74b1dcbe2" 00:11:53.384 ], 00:11:53.384 "product_name": "Raid Volume", 00:11:53.384 "block_size": 512, 00:11:53.384 "num_blocks": 63488, 00:11:53.384 "uuid": "92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:53.384 "assigned_rate_limits": { 00:11:53.384 "rw_ios_per_sec": 0, 00:11:53.384 "rw_mbytes_per_sec": 0, 00:11:53.384 "r_mbytes_per_sec": 0, 00:11:53.384 "w_mbytes_per_sec": 0 00:11:53.384 }, 00:11:53.384 "claimed": false, 00:11:53.384 "zoned": false, 00:11:53.384 "supported_io_types": { 00:11:53.384 "read": true, 00:11:53.384 "write": true, 00:11:53.384 "unmap": false, 00:11:53.384 "flush": false, 00:11:53.384 "reset": true, 00:11:53.384 "nvme_admin": false, 00:11:53.384 "nvme_io": false, 00:11:53.384 "nvme_io_md": false, 00:11:53.384 "write_zeroes": true, 00:11:53.384 "zcopy": false, 00:11:53.384 "get_zone_info": false, 00:11:53.384 "zone_management": false, 00:11:53.384 "zone_append": false, 00:11:53.384 "compare": false, 00:11:53.384 "compare_and_write": false, 00:11:53.384 "abort": false, 00:11:53.384 "seek_hole": false, 00:11:53.384 "seek_data": false, 00:11:53.384 "copy": false, 00:11:53.384 "nvme_iov_md": false 00:11:53.384 }, 00:11:53.384 "memory_domains": [ 00:11:53.384 { 00:11:53.384 "dma_device_id": "system", 00:11:53.384 "dma_device_type": 1 00:11:53.384 }, 00:11:53.384 { 00:11:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.384 "dma_device_type": 2 00:11:53.384 }, 00:11:53.384 { 00:11:53.384 "dma_device_id": "system", 00:11:53.384 "dma_device_type": 1 00:11:53.384 }, 00:11:53.384 { 00:11:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.384 "dma_device_type": 2 00:11:53.384 }, 00:11:53.384 { 00:11:53.384 "dma_device_id": "system", 00:11:53.384 "dma_device_type": 1 00:11:53.384 }, 00:11:53.384 { 00:11:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.384 "dma_device_type": 2 00:11:53.384 } 00:11:53.384 ], 00:11:53.384 "driver_specific": { 00:11:53.384 "raid": { 00:11:53.384 
"uuid": "92215cab-4784-4f63-8eff-e7b74b1dcbe2", 00:11:53.384 "strip_size_kb": 0, 00:11:53.384 "state": "online", 00:11:53.384 "raid_level": "raid1", 00:11:53.384 "superblock": true, 00:11:53.384 "num_base_bdevs": 3, 00:11:53.384 "num_base_bdevs_discovered": 3, 00:11:53.384 "num_base_bdevs_operational": 3, 00:11:53.384 "base_bdevs_list": [ 00:11:53.384 { 00:11:53.384 "name": "NewBaseBdev", 00:11:53.384 "uuid": "6ac7358e-4c7a-41d7-9ee0-eab3f5e347ca", 00:11:53.384 "is_configured": true, 00:11:53.384 "data_offset": 2048, 00:11:53.384 "data_size": 63488 00:11:53.384 }, 00:11:53.384 { 00:11:53.384 "name": "BaseBdev2", 00:11:53.384 "uuid": "c0381075-03d3-4dbc-9bd5-7dbddf6096bb", 00:11:53.384 "is_configured": true, 00:11:53.384 "data_offset": 2048, 00:11:53.384 "data_size": 63488 00:11:53.384 }, 00:11:53.384 { 00:11:53.384 "name": "BaseBdev3", 00:11:53.384 "uuid": "a2b9aba0-8e1e-4854-9279-d2a95de9a305", 00:11:53.384 "is_configured": true, 00:11:53.384 "data_offset": 2048, 00:11:53.384 "data_size": 63488 00:11:53.384 } 00:11:53.384 ] 00:11:53.384 } 00:11:53.384 } 00:11:53.384 }' 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:53.384 BaseBdev2 00:11:53.384 BaseBdev3' 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:53.384 04:28:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.384 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.385 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.645 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.645 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.645 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.645 04:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.645 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.645 04:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.645 [2024-11-27 04:28:50.001553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.645 [2024-11-27 04:28:50.001614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.645 [2024-11-27 04:28:50.001733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.645 [2024-11-27 04:28:50.002116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.645 [2024-11-27 04:28:50.002136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68270 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68270 ']' 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68270 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68270 00:11:53.645 killing process with pid 68270 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68270' 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68270 00:11:53.645 [2024-11-27 04:28:50.041835] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.645 04:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68270 00:11:53.905 [2024-11-27 04:28:50.400227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.285 04:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:55.285 00:11:55.285 real 0m11.453s 00:11:55.285 user 0m18.016s 00:11:55.285 sys 0m1.831s 00:11:55.285 04:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.285 04:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 ************************************ 00:11:55.285 END TEST raid_state_function_test_sb 00:11:55.285 ************************************ 00:11:55.285 04:28:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:11:55.285 04:28:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:55.285 04:28:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.285 04:28:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 ************************************ 00:11:55.285 START TEST raid_superblock_test 00:11:55.285 ************************************ 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68902 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68902 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68902 ']' 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.285 04:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.545 [2024-11-27 04:28:51.892268] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:55.545 [2024-11-27 04:28:51.892410] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68902 ] 00:11:55.545 [2024-11-27 04:28:52.063354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.804 [2024-11-27 04:28:52.218664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.064 [2024-11-27 04:28:52.486181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.064 [2024-11-27 04:28:52.486240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.322 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.322 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.322 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:56.322 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.322 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:56.322 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:56.322 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:56.322 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.322 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:56.323 
04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.323 malloc1 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.323 [2024-11-27 04:28:52.865250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:56.323 [2024-11-27 04:28:52.865337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.323 [2024-11-27 04:28:52.865366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:56.323 [2024-11-27 04:28:52.865377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.323 [2024-11-27 04:28:52.868335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.323 [2024-11-27 04:28:52.868383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:56.323 pt1 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.323 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.583 malloc2 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.583 [2024-11-27 04:28:52.938361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:56.583 [2024-11-27 04:28:52.938443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.583 [2024-11-27 04:28:52.938477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:56.583 [2024-11-27 04:28:52.938489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.583 [2024-11-27 04:28:52.941252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.583 [2024-11-27 04:28:52.941292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:56.583 
pt2 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.583 04:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.583 malloc3 00:11:56.583 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.583 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:56.583 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.583 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.583 [2024-11-27 04:28:53.019201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:56.583 [2024-11-27 04:28:53.019406] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.583 [2024-11-27 04:28:53.019481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:56.583 [2024-11-27 04:28:53.019558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.583 [2024-11-27 04:28:53.022528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.584 [2024-11-27 04:28:53.022633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:56.584 pt3 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.584 [2024-11-27 04:28:53.031493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:56.584 [2024-11-27 04:28:53.033960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:56.584 [2024-11-27 04:28:53.034108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:56.584 [2024-11-27 04:28:53.034369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:56.584 [2024-11-27 04:28:53.034432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:56.584 [2024-11-27 04:28:53.034796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:56.584 
[2024-11-27 04:28:53.035065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:56.584 [2024-11-27 04:28:53.035140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:56.584 [2024-11-27 04:28:53.035441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.584 "name": "raid_bdev1", 00:11:56.584 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:11:56.584 "strip_size_kb": 0, 00:11:56.584 "state": "online", 00:11:56.584 "raid_level": "raid1", 00:11:56.584 "superblock": true, 00:11:56.584 "num_base_bdevs": 3, 00:11:56.584 "num_base_bdevs_discovered": 3, 00:11:56.584 "num_base_bdevs_operational": 3, 00:11:56.584 "base_bdevs_list": [ 00:11:56.584 { 00:11:56.584 "name": "pt1", 00:11:56.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.584 "is_configured": true, 00:11:56.584 "data_offset": 2048, 00:11:56.584 "data_size": 63488 00:11:56.584 }, 00:11:56.584 { 00:11:56.584 "name": "pt2", 00:11:56.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.584 "is_configured": true, 00:11:56.584 "data_offset": 2048, 00:11:56.584 "data_size": 63488 00:11:56.584 }, 00:11:56.584 { 00:11:56.584 "name": "pt3", 00:11:56.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.584 "is_configured": true, 00:11:56.584 "data_offset": 2048, 00:11:56.584 "data_size": 63488 00:11:56.584 } 00:11:56.584 ] 00:11:56.584 }' 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.584 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.167 04:28:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.167 [2024-11-27 04:28:53.535177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.167 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.167 "name": "raid_bdev1", 00:11:57.167 "aliases": [ 00:11:57.167 "0b36167a-029b-4551-bb05-1b5f319e118a" 00:11:57.167 ], 00:11:57.167 "product_name": "Raid Volume", 00:11:57.167 "block_size": 512, 00:11:57.167 "num_blocks": 63488, 00:11:57.167 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:11:57.167 "assigned_rate_limits": { 00:11:57.167 "rw_ios_per_sec": 0, 00:11:57.167 "rw_mbytes_per_sec": 0, 00:11:57.167 "r_mbytes_per_sec": 0, 00:11:57.167 "w_mbytes_per_sec": 0 00:11:57.167 }, 00:11:57.167 "claimed": false, 00:11:57.167 "zoned": false, 00:11:57.167 "supported_io_types": { 00:11:57.167 "read": true, 00:11:57.167 "write": true, 00:11:57.167 "unmap": false, 00:11:57.167 "flush": false, 00:11:57.167 "reset": true, 00:11:57.167 "nvme_admin": false, 00:11:57.167 "nvme_io": false, 00:11:57.167 "nvme_io_md": false, 00:11:57.167 "write_zeroes": true, 00:11:57.167 "zcopy": false, 00:11:57.167 "get_zone_info": false, 00:11:57.167 "zone_management": false, 00:11:57.168 "zone_append": false, 00:11:57.168 "compare": false, 00:11:57.168 
"compare_and_write": false, 00:11:57.168 "abort": false, 00:11:57.168 "seek_hole": false, 00:11:57.168 "seek_data": false, 00:11:57.168 "copy": false, 00:11:57.168 "nvme_iov_md": false 00:11:57.168 }, 00:11:57.168 "memory_domains": [ 00:11:57.168 { 00:11:57.168 "dma_device_id": "system", 00:11:57.168 "dma_device_type": 1 00:11:57.168 }, 00:11:57.168 { 00:11:57.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.168 "dma_device_type": 2 00:11:57.168 }, 00:11:57.168 { 00:11:57.168 "dma_device_id": "system", 00:11:57.168 "dma_device_type": 1 00:11:57.168 }, 00:11:57.168 { 00:11:57.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.168 "dma_device_type": 2 00:11:57.168 }, 00:11:57.168 { 00:11:57.168 "dma_device_id": "system", 00:11:57.168 "dma_device_type": 1 00:11:57.168 }, 00:11:57.168 { 00:11:57.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.168 "dma_device_type": 2 00:11:57.168 } 00:11:57.168 ], 00:11:57.168 "driver_specific": { 00:11:57.168 "raid": { 00:11:57.168 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:11:57.168 "strip_size_kb": 0, 00:11:57.168 "state": "online", 00:11:57.168 "raid_level": "raid1", 00:11:57.168 "superblock": true, 00:11:57.168 "num_base_bdevs": 3, 00:11:57.168 "num_base_bdevs_discovered": 3, 00:11:57.168 "num_base_bdevs_operational": 3, 00:11:57.168 "base_bdevs_list": [ 00:11:57.168 { 00:11:57.168 "name": "pt1", 00:11:57.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.168 "is_configured": true, 00:11:57.168 "data_offset": 2048, 00:11:57.168 "data_size": 63488 00:11:57.168 }, 00:11:57.168 { 00:11:57.168 "name": "pt2", 00:11:57.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.168 "is_configured": true, 00:11:57.168 "data_offset": 2048, 00:11:57.168 "data_size": 63488 00:11:57.168 }, 00:11:57.168 { 00:11:57.168 "name": "pt3", 00:11:57.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.168 "is_configured": true, 00:11:57.168 "data_offset": 2048, 00:11:57.168 "data_size": 63488 00:11:57.168 } 
00:11:57.168 ] 00:11:57.168 } 00:11:57.168 } 00:11:57.168 }' 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:57.168 pt2 00:11:57.168 pt3' 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.168 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:57.428 [2024-11-27 04:28:53.838594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0b36167a-029b-4551-bb05-1b5f319e118a 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0b36167a-029b-4551-bb05-1b5f319e118a ']' 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.428 [2024-11-27 04:28:53.886131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.428 [2024-11-27 04:28:53.886268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.428 [2024-11-27 04:28:53.886420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.428 [2024-11-27 04:28:53.886560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.428 [2024-11-27 04:28:53.886611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:57.428 04:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.687 [2024-11-27 04:28:54.033939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:57.687 [2024-11-27 04:28:54.036601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:57.687 [2024-11-27 04:28:54.036746] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:57.687 [2024-11-27 04:28:54.036878] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:57.687 [2024-11-27 04:28:54.037002] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:57.687 [2024-11-27 04:28:54.037097] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:57.687 [2024-11-27 04:28:54.037189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.687 [2024-11-27 04:28:54.037243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:57.687 request: 00:11:57.687 { 00:11:57.687 "name": "raid_bdev1", 00:11:57.687 "raid_level": "raid1", 00:11:57.687 "base_bdevs": [ 00:11:57.687 "malloc1", 00:11:57.687 "malloc2", 00:11:57.687 "malloc3" 00:11:57.687 ], 00:11:57.687 "superblock": false, 00:11:57.687 "method": "bdev_raid_create", 00:11:57.687 "req_id": 1 00:11:57.687 } 00:11:57.687 Got JSON-RPC error response 00:11:57.687 response: 00:11:57.687 { 00:11:57.687 "code": -17, 00:11:57.687 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:57.687 } 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.687 [2024-11-27 04:28:54.101800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:57.687 [2024-11-27 04:28:54.101906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.687 [2024-11-27 04:28:54.101940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:57.687 [2024-11-27 04:28:54.101957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.687 [2024-11-27 04:28:54.104982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.687 [2024-11-27 04:28:54.105026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:57.687 [2024-11-27 04:28:54.105160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:57.687 [2024-11-27 04:28:54.105230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:57.687 pt1 00:11:57.687 
04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.687 "name": "raid_bdev1", 00:11:57.687 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:11:57.687 "strip_size_kb": 0, 00:11:57.687 
"state": "configuring", 00:11:57.687 "raid_level": "raid1", 00:11:57.687 "superblock": true, 00:11:57.687 "num_base_bdevs": 3, 00:11:57.687 "num_base_bdevs_discovered": 1, 00:11:57.687 "num_base_bdevs_operational": 3, 00:11:57.687 "base_bdevs_list": [ 00:11:57.687 { 00:11:57.687 "name": "pt1", 00:11:57.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.687 "is_configured": true, 00:11:57.687 "data_offset": 2048, 00:11:57.687 "data_size": 63488 00:11:57.687 }, 00:11:57.687 { 00:11:57.687 "name": null, 00:11:57.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.687 "is_configured": false, 00:11:57.687 "data_offset": 2048, 00:11:57.687 "data_size": 63488 00:11:57.687 }, 00:11:57.687 { 00:11:57.687 "name": null, 00:11:57.687 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.687 "is_configured": false, 00:11:57.687 "data_offset": 2048, 00:11:57.687 "data_size": 63488 00:11:57.687 } 00:11:57.687 ] 00:11:57.687 }' 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.687 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.256 [2024-11-27 04:28:54.577121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:58.256 [2024-11-27 04:28:54.577224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.256 [2024-11-27 04:28:54.577256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:58.256 
[2024-11-27 04:28:54.577269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.256 [2024-11-27 04:28:54.577862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.256 [2024-11-27 04:28:54.577893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:58.256 [2024-11-27 04:28:54.578006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:58.256 [2024-11-27 04:28:54.578041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.256 pt2 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.256 [2024-11-27 04:28:54.589143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.256 04:28:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.257 "name": "raid_bdev1", 00:11:58.257 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:11:58.257 "strip_size_kb": 0, 00:11:58.257 "state": "configuring", 00:11:58.257 "raid_level": "raid1", 00:11:58.257 "superblock": true, 00:11:58.257 "num_base_bdevs": 3, 00:11:58.257 "num_base_bdevs_discovered": 1, 00:11:58.257 "num_base_bdevs_operational": 3, 00:11:58.257 "base_bdevs_list": [ 00:11:58.257 { 00:11:58.257 "name": "pt1", 00:11:58.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.257 "is_configured": true, 00:11:58.257 "data_offset": 2048, 00:11:58.257 "data_size": 63488 00:11:58.257 }, 00:11:58.257 { 00:11:58.257 "name": null, 00:11:58.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.257 "is_configured": false, 00:11:58.257 "data_offset": 0, 00:11:58.257 "data_size": 63488 00:11:58.257 }, 00:11:58.257 { 00:11:58.257 "name": null, 00:11:58.257 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.257 "is_configured": false, 00:11:58.257 
"data_offset": 2048, 00:11:58.257 "data_size": 63488 00:11:58.257 } 00:11:58.257 ] 00:11:58.257 }' 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.257 04:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.516 [2024-11-27 04:28:55.084268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:58.516 [2024-11-27 04:28:55.084392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.516 [2024-11-27 04:28:55.084420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:58.516 [2024-11-27 04:28:55.084436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.516 [2024-11-27 04:28:55.085086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.516 [2024-11-27 04:28:55.085157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:58.516 [2024-11-27 04:28:55.085282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:58.516 [2024-11-27 04:28:55.085339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.516 pt2 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.516 04:28:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.516 [2024-11-27 04:28:55.092252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:58.516 [2024-11-27 04:28:55.092341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.516 [2024-11-27 04:28:55.092364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:58.516 [2024-11-27 04:28:55.092380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.516 [2024-11-27 04:28:55.093024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.516 [2024-11-27 04:28:55.093079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:58.516 [2024-11-27 04:28:55.093200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:58.516 [2024-11-27 04:28:55.093235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:58.516 [2024-11-27 04:28:55.093427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:58.516 [2024-11-27 04:28:55.093452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.516 [2024-11-27 04:28:55.093791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:58.516 [2024-11-27 04:28:55.093990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:58.516 [2024-11-27 04:28:55.094006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:58.516 [2024-11-27 04:28:55.094203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.516 pt3 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.516 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.782 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.782 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.782 04:28:55 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:58.782 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.782 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.782 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.782 "name": "raid_bdev1", 00:11:58.782 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:11:58.782 "strip_size_kb": 0, 00:11:58.782 "state": "online", 00:11:58.782 "raid_level": "raid1", 00:11:58.782 "superblock": true, 00:11:58.782 "num_base_bdevs": 3, 00:11:58.782 "num_base_bdevs_discovered": 3, 00:11:58.782 "num_base_bdevs_operational": 3, 00:11:58.782 "base_bdevs_list": [ 00:11:58.782 { 00:11:58.782 "name": "pt1", 00:11:58.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.782 "is_configured": true, 00:11:58.782 "data_offset": 2048, 00:11:58.782 "data_size": 63488 00:11:58.782 }, 00:11:58.782 { 00:11:58.782 "name": "pt2", 00:11:58.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.782 "is_configured": true, 00:11:58.782 "data_offset": 2048, 00:11:58.782 "data_size": 63488 00:11:58.782 }, 00:11:58.782 { 00:11:58.782 "name": "pt3", 00:11:58.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.782 "is_configured": true, 00:11:58.782 "data_offset": 2048, 00:11:58.782 "data_size": 63488 00:11:58.782 } 00:11:58.782 ] 00:11:58.782 }' 00:11:58.782 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.782 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.041 [2024-11-27 04:28:55.551874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.041 "name": "raid_bdev1", 00:11:59.041 "aliases": [ 00:11:59.041 "0b36167a-029b-4551-bb05-1b5f319e118a" 00:11:59.041 ], 00:11:59.041 "product_name": "Raid Volume", 00:11:59.041 "block_size": 512, 00:11:59.041 "num_blocks": 63488, 00:11:59.041 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:11:59.041 "assigned_rate_limits": { 00:11:59.041 "rw_ios_per_sec": 0, 00:11:59.041 "rw_mbytes_per_sec": 0, 00:11:59.041 "r_mbytes_per_sec": 0, 00:11:59.041 "w_mbytes_per_sec": 0 00:11:59.041 }, 00:11:59.041 "claimed": false, 00:11:59.041 "zoned": false, 00:11:59.041 "supported_io_types": { 00:11:59.041 "read": true, 00:11:59.041 "write": true, 00:11:59.041 "unmap": false, 00:11:59.041 "flush": false, 00:11:59.041 "reset": true, 00:11:59.041 "nvme_admin": false, 00:11:59.041 "nvme_io": false, 00:11:59.041 "nvme_io_md": false, 00:11:59.041 "write_zeroes": true, 00:11:59.041 "zcopy": false, 00:11:59.041 "get_zone_info": false, 
00:11:59.041 "zone_management": false, 00:11:59.041 "zone_append": false, 00:11:59.041 "compare": false, 00:11:59.041 "compare_and_write": false, 00:11:59.041 "abort": false, 00:11:59.041 "seek_hole": false, 00:11:59.041 "seek_data": false, 00:11:59.041 "copy": false, 00:11:59.041 "nvme_iov_md": false 00:11:59.041 }, 00:11:59.041 "memory_domains": [ 00:11:59.041 { 00:11:59.041 "dma_device_id": "system", 00:11:59.041 "dma_device_type": 1 00:11:59.041 }, 00:11:59.041 { 00:11:59.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.041 "dma_device_type": 2 00:11:59.041 }, 00:11:59.041 { 00:11:59.041 "dma_device_id": "system", 00:11:59.041 "dma_device_type": 1 00:11:59.041 }, 00:11:59.041 { 00:11:59.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.041 "dma_device_type": 2 00:11:59.041 }, 00:11:59.041 { 00:11:59.041 "dma_device_id": "system", 00:11:59.041 "dma_device_type": 1 00:11:59.041 }, 00:11:59.041 { 00:11:59.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.041 "dma_device_type": 2 00:11:59.041 } 00:11:59.041 ], 00:11:59.041 "driver_specific": { 00:11:59.041 "raid": { 00:11:59.041 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:11:59.041 "strip_size_kb": 0, 00:11:59.041 "state": "online", 00:11:59.041 "raid_level": "raid1", 00:11:59.041 "superblock": true, 00:11:59.041 "num_base_bdevs": 3, 00:11:59.041 "num_base_bdevs_discovered": 3, 00:11:59.041 "num_base_bdevs_operational": 3, 00:11:59.041 "base_bdevs_list": [ 00:11:59.041 { 00:11:59.041 "name": "pt1", 00:11:59.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.041 "is_configured": true, 00:11:59.041 "data_offset": 2048, 00:11:59.041 "data_size": 63488 00:11:59.041 }, 00:11:59.041 { 00:11:59.041 "name": "pt2", 00:11:59.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.041 "is_configured": true, 00:11:59.041 "data_offset": 2048, 00:11:59.041 "data_size": 63488 00:11:59.041 }, 00:11:59.041 { 00:11:59.041 "name": "pt3", 00:11:59.041 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:59.041 "is_configured": true, 00:11:59.041 "data_offset": 2048, 00:11:59.041 "data_size": 63488 00:11:59.041 } 00:11:59.041 ] 00:11:59.041 } 00:11:59.041 } 00:11:59.041 }' 00:11:59.041 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.299 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:59.299 pt2 00:11:59.299 pt3' 00:11:59.299 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.299 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.299 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.300 [2024-11-27 04:28:55.847282] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0b36167a-029b-4551-bb05-1b5f319e118a '!=' 0b36167a-029b-4551-bb05-1b5f319e118a ']' 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.300 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.558 [2024-11-27 04:28:55.886950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.558 04:28:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.558 "name": "raid_bdev1", 00:11:59.558 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:11:59.558 "strip_size_kb": 0, 00:11:59.558 "state": "online", 00:11:59.558 "raid_level": "raid1", 00:11:59.558 "superblock": true, 00:11:59.558 "num_base_bdevs": 3, 00:11:59.558 "num_base_bdevs_discovered": 2, 00:11:59.558 "num_base_bdevs_operational": 2, 00:11:59.558 "base_bdevs_list": [ 00:11:59.558 { 00:11:59.558 "name": null, 00:11:59.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.558 "is_configured": false, 00:11:59.558 "data_offset": 0, 00:11:59.558 "data_size": 63488 00:11:59.558 }, 00:11:59.558 { 00:11:59.558 "name": "pt2", 00:11:59.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.558 "is_configured": true, 00:11:59.558 "data_offset": 2048, 00:11:59.558 "data_size": 63488 00:11:59.558 }, 00:11:59.558 { 00:11:59.558 "name": "pt3", 00:11:59.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.558 "is_configured": true, 00:11:59.558 "data_offset": 2048, 00:11:59.558 "data_size": 63488 00:11:59.558 } 
00:11:59.558 ] 00:11:59.558 }' 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.558 04:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.816 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:59.816 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.816 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.816 [2024-11-27 04:28:56.366106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.816 [2024-11-27 04:28:56.366145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.816 [2024-11-27 04:28:56.366238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.816 [2024-11-27 04:28:56.366311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.816 [2024-11-27 04:28:56.366334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:59.816 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.816 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.816 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:59.816 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.816 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.816 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.088 04:28:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.088 [2024-11-27 04:28:56.453911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:00.088 [2024-11-27 04:28:56.453984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.088 [2024-11-27 04:28:56.454005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:00.088 [2024-11-27 04:28:56.454018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.088 [2024-11-27 04:28:56.456566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.088 [2024-11-27 04:28:56.456617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:00.088 [2024-11-27 04:28:56.456710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:00.088 [2024-11-27 04:28:56.456766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:00.088 pt2 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.088 04:28:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.088 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.089 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.089 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.089 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.089 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.089 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.089 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.089 "name": "raid_bdev1", 00:12:00.089 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:12:00.089 "strip_size_kb": 0, 00:12:00.089 "state": "configuring", 00:12:00.089 "raid_level": "raid1", 00:12:00.089 "superblock": true, 00:12:00.089 "num_base_bdevs": 3, 00:12:00.089 "num_base_bdevs_discovered": 1, 00:12:00.089 "num_base_bdevs_operational": 2, 00:12:00.089 "base_bdevs_list": [ 00:12:00.089 { 00:12:00.089 "name": null, 00:12:00.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.089 "is_configured": false, 00:12:00.089 "data_offset": 2048, 00:12:00.089 "data_size": 63488 00:12:00.089 }, 00:12:00.089 { 00:12:00.089 "name": "pt2", 00:12:00.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.089 "is_configured": true, 00:12:00.089 "data_offset": 2048, 00:12:00.089 "data_size": 63488 00:12:00.089 }, 00:12:00.089 { 00:12:00.089 "name": null, 00:12:00.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.089 "is_configured": false, 00:12:00.089 "data_offset": 2048, 00:12:00.089 "data_size": 63488 00:12:00.089 } 
00:12:00.089 ] 00:12:00.089 }' 00:12:00.089 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.089 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.347 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:00.347 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:00.347 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:12:00.347 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:00.347 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.347 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.347 [2024-11-27 04:28:56.893208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:00.347 [2024-11-27 04:28:56.893286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.347 [2024-11-27 04:28:56.893309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:00.347 [2024-11-27 04:28:56.893338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.347 [2024-11-27 04:28:56.893850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.347 [2024-11-27 04:28:56.893883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:00.347 [2024-11-27 04:28:56.893984] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:00.347 [2024-11-27 04:28:56.894017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:00.347 [2024-11-27 04:28:56.894152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:12:00.347 [2024-11-27 04:28:56.894174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.348 [2024-11-27 04:28:56.894473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:00.348 [2024-11-27 04:28:56.894658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:00.348 [2024-11-27 04:28:56.894677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:00.348 [2024-11-27 04:28:56.894833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.348 pt3 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.348 
04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.348 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.606 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.606 "name": "raid_bdev1", 00:12:00.606 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:12:00.606 "strip_size_kb": 0, 00:12:00.606 "state": "online", 00:12:00.606 "raid_level": "raid1", 00:12:00.606 "superblock": true, 00:12:00.606 "num_base_bdevs": 3, 00:12:00.606 "num_base_bdevs_discovered": 2, 00:12:00.606 "num_base_bdevs_operational": 2, 00:12:00.606 "base_bdevs_list": [ 00:12:00.606 { 00:12:00.606 "name": null, 00:12:00.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.606 "is_configured": false, 00:12:00.606 "data_offset": 2048, 00:12:00.606 "data_size": 63488 00:12:00.606 }, 00:12:00.606 { 00:12:00.606 "name": "pt2", 00:12:00.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.606 "is_configured": true, 00:12:00.606 "data_offset": 2048, 00:12:00.606 "data_size": 63488 00:12:00.606 }, 00:12:00.606 { 00:12:00.606 "name": "pt3", 00:12:00.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.606 "is_configured": true, 00:12:00.606 "data_offset": 2048, 00:12:00.606 "data_size": 63488 00:12:00.606 } 00:12:00.606 ] 00:12:00.606 }' 00:12:00.606 04:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.606 04:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.864 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.865 [2024-11-27 04:28:57.392323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.865 [2024-11-27 04:28:57.392362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.865 [2024-11-27 04:28:57.392450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.865 [2024-11-27 04:28:57.392516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.865 [2024-11-27 04:28:57.392527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.865 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.124 [2024-11-27 04:28:57.464243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:01.124 [2024-11-27 04:28:57.464313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.124 [2024-11-27 04:28:57.464335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:01.124 [2024-11-27 04:28:57.464345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.124 [2024-11-27 04:28:57.466675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.124 [2024-11-27 04:28:57.466714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:01.124 [2024-11-27 04:28:57.466823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:01.124 [2024-11-27 04:28:57.466873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:01.124 [2024-11-27 04:28:57.467022] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:01.124 [2024-11-27 04:28:57.467042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:01.124 [2024-11-27 04:28:57.467059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:12:01.124 [2024-11-27 04:28:57.467136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:01.124 pt1 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.124 "name": "raid_bdev1", 00:12:01.124 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:12:01.124 "strip_size_kb": 0, 00:12:01.124 "state": "configuring", 00:12:01.124 "raid_level": "raid1", 00:12:01.124 "superblock": true, 00:12:01.124 "num_base_bdevs": 3, 00:12:01.124 "num_base_bdevs_discovered": 1, 00:12:01.124 "num_base_bdevs_operational": 2, 00:12:01.124 "base_bdevs_list": [ 00:12:01.124 { 00:12:01.124 "name": null, 00:12:01.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.124 "is_configured": false, 00:12:01.124 "data_offset": 2048, 00:12:01.124 "data_size": 63488 00:12:01.124 }, 00:12:01.124 { 00:12:01.124 "name": "pt2", 00:12:01.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.124 "is_configured": true, 00:12:01.124 "data_offset": 2048, 00:12:01.124 "data_size": 63488 00:12:01.124 }, 00:12:01.124 { 00:12:01.124 "name": null, 00:12:01.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:01.124 "is_configured": false, 00:12:01.124 "data_offset": 2048, 00:12:01.124 "data_size": 63488 00:12:01.124 } 00:12:01.124 ] 00:12:01.124 }' 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.124 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.383 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:01.383 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:01.383 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.383 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.383 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.642 [2024-11-27 04:28:57.979513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:01.642 [2024-11-27 04:28:57.979590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.642 [2024-11-27 04:28:57.979619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:01.642 [2024-11-27 04:28:57.979630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.642 [2024-11-27 04:28:57.980183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.642 [2024-11-27 04:28:57.980212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:01.642 [2024-11-27 04:28:57.980307] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:01.642 [2024-11-27 04:28:57.980336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:01.642 [2024-11-27 04:28:57.980482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:01.642 [2024-11-27 04:28:57.980500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.642 [2024-11-27 04:28:57.980796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:01.642 [2024-11-27 04:28:57.980980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:01.642 [2024-11-27 04:28:57.981007] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:01.642 [2024-11-27 04:28:57.981176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.642 pt3 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.642 04:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.642 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:01.642 04:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.642 "name": "raid_bdev1", 00:12:01.642 "uuid": "0b36167a-029b-4551-bb05-1b5f319e118a", 00:12:01.642 "strip_size_kb": 0, 00:12:01.642 "state": "online", 00:12:01.642 "raid_level": "raid1", 00:12:01.642 "superblock": true, 00:12:01.642 "num_base_bdevs": 3, 00:12:01.642 "num_base_bdevs_discovered": 2, 00:12:01.642 "num_base_bdevs_operational": 2, 00:12:01.642 "base_bdevs_list": [ 00:12:01.642 { 00:12:01.642 "name": null, 00:12:01.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.642 "is_configured": false, 00:12:01.642 "data_offset": 2048, 00:12:01.642 "data_size": 63488 00:12:01.642 }, 00:12:01.642 { 00:12:01.642 "name": "pt2", 00:12:01.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.642 "is_configured": true, 00:12:01.642 "data_offset": 2048, 00:12:01.642 "data_size": 63488 00:12:01.642 }, 00:12:01.642 { 00:12:01.642 "name": "pt3", 00:12:01.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:01.642 "is_configured": true, 00:12:01.642 "data_offset": 2048, 00:12:01.642 "data_size": 63488 00:12:01.642 } 00:12:01.642 ] 00:12:01.642 }' 00:12:01.642 04:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.642 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.900 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.900 [2024-11-27 04:28:58.482929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0b36167a-029b-4551-bb05-1b5f319e118a '!=' 0b36167a-029b-4551-bb05-1b5f319e118a ']' 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68902 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68902 ']' 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68902 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68902 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.159 killing process with pid 68902 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68902' 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68902 00:12:02.159 [2024-11-27 04:28:58.561843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.159 [2024-11-27 04:28:58.561952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.159 04:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68902 00:12:02.159 [2024-11-27 04:28:58.562021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.159 [2024-11-27 04:28:58.562035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:02.417 [2024-11-27 04:28:58.894175] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.793 04:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:03.793 00:12:03.793 real 0m8.287s 00:12:03.793 user 0m12.882s 00:12:03.793 sys 0m1.527s 00:12:03.793 04:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.793 04:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.793 ************************************ 00:12:03.793 END TEST raid_superblock_test 00:12:03.793 ************************************ 00:12:03.793 04:29:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:03.793 04:29:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:03.793 04:29:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.793 04:29:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.793 ************************************ 00:12:03.793 START TEST raid_read_error_test 00:12:03.793 ************************************ 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:12:03.793 04:29:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:03.793 04:29:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7Sovia3lUP 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69353 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:03.793 04:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69353 00:12:03.794 04:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69353 ']' 00:12:03.794 04:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.794 04:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.794 04:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.794 04:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.794 04:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.794 [2024-11-27 04:29:00.268460] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:12:03.794 [2024-11-27 04:29:00.268589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69353 ] 00:12:04.053 [2024-11-27 04:29:00.442189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.053 [2024-11-27 04:29:00.563453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.310 [2024-11-27 04:29:00.769680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.310 [2024-11-27 04:29:00.769746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.568 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.568 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:04.568 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:04.568 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:04.568 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.568 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.826 BaseBdev1_malloc 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.826 true 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.826 [2024-11-27 04:29:01.210481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:04.826 [2024-11-27 04:29:01.210549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.826 [2024-11-27 04:29:01.210572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:04.826 [2024-11-27 04:29:01.210585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.826 [2024-11-27 04:29:01.212990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.826 [2024-11-27 04:29:01.213037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:04.826 BaseBdev1 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.826 BaseBdev2_malloc 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.826 true 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.826 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.826 [2024-11-27 04:29:01.280502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:04.826 [2024-11-27 04:29:01.280668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.826 [2024-11-27 04:29:01.280713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:04.826 [2024-11-27 04:29:01.280728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.826 [2024-11-27 04:29:01.283203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.827 [2024-11-27 04:29:01.283247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:04.827 BaseBdev2 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.827 BaseBdev3_malloc 00:12:04.827 04:29:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.827 true 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.827 [2024-11-27 04:29:01.368194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:04.827 [2024-11-27 04:29:01.368258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.827 [2024-11-27 04:29:01.368281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:04.827 [2024-11-27 04:29:01.368294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.827 [2024-11-27 04:29:01.370705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.827 [2024-11-27 04:29:01.370804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:04.827 BaseBdev3 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.827 [2024-11-27 04:29:01.380274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.827 [2024-11-27 04:29:01.382389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.827 [2024-11-27 04:29:01.382475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.827 [2024-11-27 04:29:01.382711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:04.827 [2024-11-27 04:29:01.382726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.827 [2024-11-27 04:29:01.383020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:04.827 [2024-11-27 04:29:01.383248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:04.827 [2024-11-27 04:29:01.383262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:04.827 [2024-11-27 04:29:01.383451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.827 04:29:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.827 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.085 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.085 "name": "raid_bdev1", 00:12:05.085 "uuid": "fb7ef8ce-6cc7-43a8-ab22-71c54b123a9e", 00:12:05.085 "strip_size_kb": 0, 00:12:05.085 "state": "online", 00:12:05.085 "raid_level": "raid1", 00:12:05.085 "superblock": true, 00:12:05.085 "num_base_bdevs": 3, 00:12:05.085 "num_base_bdevs_discovered": 3, 00:12:05.085 "num_base_bdevs_operational": 3, 00:12:05.085 "base_bdevs_list": [ 00:12:05.085 { 00:12:05.085 "name": "BaseBdev1", 00:12:05.085 "uuid": "9861734f-7fb6-5ab0-b7a7-01309b9647c8", 00:12:05.085 "is_configured": true, 00:12:05.085 "data_offset": 2048, 00:12:05.085 "data_size": 63488 00:12:05.085 }, 00:12:05.085 { 00:12:05.085 "name": "BaseBdev2", 00:12:05.085 "uuid": "4ecec48d-1f2c-52bb-a857-ada1b02531c4", 00:12:05.085 "is_configured": true, 00:12:05.085 "data_offset": 2048, 00:12:05.085 "data_size": 63488 
00:12:05.085 }, 00:12:05.085 { 00:12:05.085 "name": "BaseBdev3", 00:12:05.085 "uuid": "e259c731-bc9c-52e8-bc65-67cf4efa657b", 00:12:05.085 "is_configured": true, 00:12:05.085 "data_offset": 2048, 00:12:05.085 "data_size": 63488 00:12:05.085 } 00:12:05.085 ] 00:12:05.085 }' 00:12:05.085 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.085 04:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.341 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:05.341 04:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:05.600 [2024-11-27 04:29:01.992829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.537 
04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.537 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.537 "name": "raid_bdev1", 00:12:06.537 "uuid": "fb7ef8ce-6cc7-43a8-ab22-71c54b123a9e", 00:12:06.537 "strip_size_kb": 0, 00:12:06.537 "state": "online", 00:12:06.537 "raid_level": "raid1", 00:12:06.537 "superblock": true, 00:12:06.537 "num_base_bdevs": 3, 00:12:06.537 "num_base_bdevs_discovered": 3, 00:12:06.538 "num_base_bdevs_operational": 3, 00:12:06.538 "base_bdevs_list": [ 00:12:06.538 { 00:12:06.538 "name": "BaseBdev1", 00:12:06.538 "uuid": "9861734f-7fb6-5ab0-b7a7-01309b9647c8", 
00:12:06.538 "is_configured": true, 00:12:06.538 "data_offset": 2048, 00:12:06.538 "data_size": 63488 00:12:06.538 }, 00:12:06.538 { 00:12:06.538 "name": "BaseBdev2", 00:12:06.538 "uuid": "4ecec48d-1f2c-52bb-a857-ada1b02531c4", 00:12:06.538 "is_configured": true, 00:12:06.538 "data_offset": 2048, 00:12:06.538 "data_size": 63488 00:12:06.538 }, 00:12:06.538 { 00:12:06.538 "name": "BaseBdev3", 00:12:06.538 "uuid": "e259c731-bc9c-52e8-bc65-67cf4efa657b", 00:12:06.538 "is_configured": true, 00:12:06.538 "data_offset": 2048, 00:12:06.538 "data_size": 63488 00:12:06.538 } 00:12:06.538 ] 00:12:06.538 }' 00:12:06.538 04:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.538 04:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.107 [2024-11-27 04:29:03.401532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.107 [2024-11-27 04:29:03.401643] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.107 [2024-11-27 04:29:03.404975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.107 [2024-11-27 04:29:03.405073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.107 [2024-11-27 04:29:03.405226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.107 [2024-11-27 04:29:03.405280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:07.107 { 00:12:07.107 "results": [ 00:12:07.107 { 00:12:07.107 "job": "raid_bdev1", 
00:12:07.107 "core_mask": "0x1", 00:12:07.107 "workload": "randrw", 00:12:07.107 "percentage": 50, 00:12:07.107 "status": "finished", 00:12:07.107 "queue_depth": 1, 00:12:07.107 "io_size": 131072, 00:12:07.107 "runtime": 1.409526, 00:12:07.107 "iops": 12209.068864284873, 00:12:07.107 "mibps": 1526.133608035609, 00:12:07.107 "io_failed": 0, 00:12:07.107 "io_timeout": 0, 00:12:07.107 "avg_latency_us": 78.84413238629833, 00:12:07.107 "min_latency_us": 25.3764192139738, 00:12:07.107 "max_latency_us": 1516.7720524017468 00:12:07.107 } 00:12:07.107 ], 00:12:07.107 "core_count": 1 00:12:07.107 } 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69353 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69353 ']' 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69353 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69353 00:12:07.107 killing process with pid 69353 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69353' 00:12:07.107 04:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69353 00:12:07.107 [2024-11-27 04:29:03.449167] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.107 04:29:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69353 00:12:07.368 [2024-11-27 04:29:03.701522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7Sovia3lUP 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:08.751 00:12:08.751 real 0m4.810s 00:12:08.751 user 0m5.771s 00:12:08.751 sys 0m0.606s 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.751 04:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.751 ************************************ 00:12:08.751 END TEST raid_read_error_test 00:12:08.751 ************************************ 00:12:08.751 04:29:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:08.751 04:29:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:08.751 04:29:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.751 04:29:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:08.751 ************************************ 00:12:08.751 START TEST raid_write_error_test 00:12:08.751 ************************************ 00:12:08.751 04:29:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z6myVJdkp4 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69499 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69499 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69499 ']' 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.751 04:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.751 [2024-11-27 04:29:05.150561] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:12:08.751 [2024-11-27 04:29:05.150784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69499 ] 00:12:08.751 [2024-11-27 04:29:05.310916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.011 [2024-11-27 04:29:05.432900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.270 [2024-11-27 04:29:05.641345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.270 [2024-11-27 04:29:05.641490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.531 BaseBdev1_malloc 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.531 true 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.531 [2024-11-27 04:29:06.089064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:09.531 [2024-11-27 04:29:06.089142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.531 [2024-11-27 04:29:06.089182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:09.531 [2024-11-27 04:29:06.089194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.531 [2024-11-27 04:29:06.091543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.531 [2024-11-27 04:29:06.091641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:09.531 BaseBdev1 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.531 04:29:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.790 BaseBdev2_malloc 00:12:09.790 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.790 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:09.790 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.790 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.790 true 00:12:09.790 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.790 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:09.790 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.790 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.791 [2024-11-27 04:29:06.155480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:09.791 [2024-11-27 04:29:06.155545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.791 [2024-11-27 04:29:06.155565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:09.791 [2024-11-27 04:29:06.155577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.791 [2024-11-27 04:29:06.157943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.791 [2024-11-27 04:29:06.157989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:09.791 BaseBdev2 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:09.791 04:29:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.791 BaseBdev3_malloc 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.791 true 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.791 [2024-11-27 04:29:06.238894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:09.791 [2024-11-27 04:29:06.238959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.791 [2024-11-27 04:29:06.238980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:09.791 [2024-11-27 04:29:06.238994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.791 [2024-11-27 04:29:06.241343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.791 [2024-11-27 04:29:06.241457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:09.791 BaseBdev3 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.791 [2024-11-27 04:29:06.250933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.791 [2024-11-27 04:29:06.252873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.791 [2024-11-27 04:29:06.252997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.791 [2024-11-27 04:29:06.253219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:09.791 [2024-11-27 04:29:06.253234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:09.791 [2024-11-27 04:29:06.253528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:09.791 [2024-11-27 04:29:06.253728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:09.791 [2024-11-27 04:29:06.253740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:09.791 [2024-11-27 04:29:06.253904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.791 "name": "raid_bdev1", 00:12:09.791 "uuid": "2ba70659-6726-48ba-ad2a-fff78a969c57", 00:12:09.791 "strip_size_kb": 0, 00:12:09.791 "state": "online", 00:12:09.791 "raid_level": "raid1", 00:12:09.791 "superblock": true, 00:12:09.791 "num_base_bdevs": 3, 00:12:09.791 "num_base_bdevs_discovered": 3, 00:12:09.791 "num_base_bdevs_operational": 3, 00:12:09.791 "base_bdevs_list": [ 00:12:09.791 { 00:12:09.791 "name": "BaseBdev1", 00:12:09.791 
"uuid": "53c8c8c0-198a-5d9e-8ef3-2e5b4d12bfe6", 00:12:09.791 "is_configured": true, 00:12:09.791 "data_offset": 2048, 00:12:09.791 "data_size": 63488 00:12:09.791 }, 00:12:09.791 { 00:12:09.791 "name": "BaseBdev2", 00:12:09.791 "uuid": "4901dac8-5af1-5190-94c8-2ccc701fa29b", 00:12:09.791 "is_configured": true, 00:12:09.791 "data_offset": 2048, 00:12:09.791 "data_size": 63488 00:12:09.791 }, 00:12:09.791 { 00:12:09.791 "name": "BaseBdev3", 00:12:09.791 "uuid": "82d21312-aa14-52ee-b85c-511c9ae022a8", 00:12:09.791 "is_configured": true, 00:12:09.791 "data_offset": 2048, 00:12:09.791 "data_size": 63488 00:12:09.791 } 00:12:09.791 ] 00:12:09.791 }' 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.791 04:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.359 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:10.359 04:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:10.360 [2024-11-27 04:29:06.783523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.300 [2024-11-27 04:29:07.699006] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:11.300 [2024-11-27 04:29:07.699186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.300 [2024-11-27 04:29:07.699478] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.300 "name": "raid_bdev1", 00:12:11.300 "uuid": "2ba70659-6726-48ba-ad2a-fff78a969c57", 00:12:11.300 "strip_size_kb": 0, 00:12:11.300 "state": "online", 00:12:11.300 "raid_level": "raid1", 00:12:11.300 "superblock": true, 00:12:11.300 "num_base_bdevs": 3, 00:12:11.300 "num_base_bdevs_discovered": 2, 00:12:11.300 "num_base_bdevs_operational": 2, 00:12:11.300 "base_bdevs_list": [ 00:12:11.300 { 00:12:11.300 "name": null, 00:12:11.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.300 "is_configured": false, 00:12:11.300 "data_offset": 0, 00:12:11.300 "data_size": 63488 00:12:11.300 }, 00:12:11.300 { 00:12:11.300 "name": "BaseBdev2", 00:12:11.300 "uuid": "4901dac8-5af1-5190-94c8-2ccc701fa29b", 00:12:11.300 "is_configured": true, 00:12:11.300 "data_offset": 2048, 00:12:11.300 "data_size": 63488 00:12:11.300 }, 00:12:11.300 { 00:12:11.300 "name": "BaseBdev3", 00:12:11.300 "uuid": "82d21312-aa14-52ee-b85c-511c9ae022a8", 00:12:11.300 "is_configured": true, 00:12:11.300 "data_offset": 2048, 00:12:11.300 "data_size": 63488 00:12:11.300 } 00:12:11.300 ] 00:12:11.300 }' 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.300 04:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.560 04:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:11.560 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.560 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.560 [2024-11-27 04:29:08.141621] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.560 [2024-11-27 04:29:08.141740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.560 [2024-11-27 04:29:08.144865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.560 [2024-11-27 04:29:08.144972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.820 [2024-11-27 04:29:08.145120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.820 [2024-11-27 04:29:08.145181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:11.820 { 00:12:11.820 "results": [ 00:12:11.820 { 00:12:11.820 "job": "raid_bdev1", 00:12:11.820 "core_mask": "0x1", 00:12:11.820 "workload": "randrw", 00:12:11.820 "percentage": 50, 00:12:11.820 "status": "finished", 00:12:11.820 "queue_depth": 1, 00:12:11.820 "io_size": 131072, 00:12:11.820 "runtime": 1.358935, 00:12:11.820 "iops": 13704.849753667393, 00:12:11.820 "mibps": 1713.1062192084241, 00:12:11.820 "io_failed": 0, 00:12:11.820 "io_timeout": 0, 00:12:11.820 "avg_latency_us": 70.01831660138957, 00:12:11.820 "min_latency_us": 24.146724890829695, 00:12:11.820 "max_latency_us": 1409.4532751091704 00:12:11.820 } 00:12:11.820 ], 00:12:11.820 "core_count": 1 00:12:11.820 } 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69499 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69499 ']' 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69499 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:11.820 04:29:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69499 00:12:11.820 killing process with pid 69499 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69499' 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69499 00:12:11.820 [2024-11-27 04:29:08.188970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.820 04:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69499 00:12:12.079 [2024-11-27 04:29:08.434689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.458 04:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z6myVJdkp4 00:12:13.458 04:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:13.458 04:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:13.458 04:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:13.458 04:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:13.458 04:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:13.458 04:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:13.458 04:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:13.458 00:12:13.458 real 0m4.620s 00:12:13.458 user 0m5.505s 00:12:13.458 sys 0m0.554s 00:12:13.458 04:29:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.458 04:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.458 ************************************ 00:12:13.458 END TEST raid_write_error_test 00:12:13.458 ************************************ 00:12:13.458 04:29:09 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:13.458 04:29:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:13.458 04:29:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:13.458 04:29:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:13.458 04:29:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.458 04:29:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.458 ************************************ 00:12:13.458 START TEST raid_state_function_test 00:12:13.458 ************************************ 00:12:13.458 04:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:12:13.458 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:13.458 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:13.458 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:13.458 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:13.458 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:13.458 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.458 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:13.458 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:13.459 
04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69642 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69642' 00:12:13.459 Process raid pid: 69642 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69642 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69642 ']' 00:12:13.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.459 04:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.459 [2024-11-27 04:29:09.833188] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:12:13.459 [2024-11-27 04:29:09.833413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.459 [2024-11-27 04:29:09.996861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.718 [2024-11-27 04:29:10.119003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.976 [2024-11-27 04:29:10.334025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.976 [2024-11-27 04:29:10.334168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.235 [2024-11-27 04:29:10.714583] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:14.235 [2024-11-27 04:29:10.714713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:14.235 [2024-11-27 04:29:10.714729] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.235 [2024-11-27 04:29:10.714739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.235 [2024-11-27 04:29:10.714745] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:14.235 [2024-11-27 04:29:10.714755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.235 [2024-11-27 04:29:10.714761] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:14.235 [2024-11-27 04:29:10.714769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.235 "name": "Existed_Raid", 00:12:14.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.235 "strip_size_kb": 64, 00:12:14.235 "state": "configuring", 00:12:14.235 "raid_level": "raid0", 00:12:14.235 "superblock": false, 00:12:14.235 "num_base_bdevs": 4, 00:12:14.235 "num_base_bdevs_discovered": 0, 00:12:14.235 "num_base_bdevs_operational": 4, 00:12:14.235 "base_bdevs_list": [ 00:12:14.235 { 00:12:14.235 "name": "BaseBdev1", 00:12:14.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.235 "is_configured": false, 00:12:14.235 "data_offset": 0, 00:12:14.235 "data_size": 0 00:12:14.235 }, 00:12:14.235 { 00:12:14.235 "name": "BaseBdev2", 00:12:14.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.235 "is_configured": false, 00:12:14.235 "data_offset": 0, 00:12:14.235 "data_size": 0 00:12:14.235 }, 00:12:14.235 { 00:12:14.235 "name": "BaseBdev3", 00:12:14.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.235 "is_configured": false, 00:12:14.235 "data_offset": 0, 00:12:14.235 "data_size": 0 00:12:14.235 }, 00:12:14.235 { 00:12:14.235 "name": "BaseBdev4", 00:12:14.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.235 "is_configured": false, 00:12:14.235 "data_offset": 0, 00:12:14.235 "data_size": 0 00:12:14.235 } 00:12:14.235 ] 00:12:14.235 }' 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.235 04:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.804 [2024-11-27 04:29:11.093920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.804 [2024-11-27 04:29:11.094031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.804 [2024-11-27 04:29:11.105883] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:14.804 [2024-11-27 04:29:11.105970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:14.804 [2024-11-27 04:29:11.105999] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.804 [2024-11-27 04:29:11.106022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.804 [2024-11-27 04:29:11.106040] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:14.804 [2024-11-27 04:29:11.106063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.804 [2024-11-27 04:29:11.106100] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:14.804 [2024-11-27 04:29:11.106128] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.804 [2024-11-27 04:29:11.156491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.804 BaseBdev1 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.804 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.805 [ 00:12:14.805 { 00:12:14.805 "name": "BaseBdev1", 00:12:14.805 "aliases": [ 00:12:14.805 "3d3f4ec2-a8b6-4b2e-886a-5c159f9c2b78" 00:12:14.805 ], 00:12:14.805 "product_name": "Malloc disk", 00:12:14.805 "block_size": 512, 00:12:14.805 "num_blocks": 65536, 00:12:14.805 "uuid": "3d3f4ec2-a8b6-4b2e-886a-5c159f9c2b78", 00:12:14.805 "assigned_rate_limits": { 00:12:14.805 "rw_ios_per_sec": 0, 00:12:14.805 "rw_mbytes_per_sec": 0, 00:12:14.805 "r_mbytes_per_sec": 0, 00:12:14.805 "w_mbytes_per_sec": 0 00:12:14.805 }, 00:12:14.805 "claimed": true, 00:12:14.805 "claim_type": "exclusive_write", 00:12:14.805 "zoned": false, 00:12:14.805 "supported_io_types": { 00:12:14.805 "read": true, 00:12:14.805 "write": true, 00:12:14.805 "unmap": true, 00:12:14.805 "flush": true, 00:12:14.805 "reset": true, 00:12:14.805 "nvme_admin": false, 00:12:14.805 "nvme_io": false, 00:12:14.805 "nvme_io_md": false, 00:12:14.805 "write_zeroes": true, 00:12:14.805 "zcopy": true, 00:12:14.805 "get_zone_info": false, 00:12:14.805 "zone_management": false, 00:12:14.805 "zone_append": false, 00:12:14.805 "compare": false, 00:12:14.805 "compare_and_write": false, 00:12:14.805 "abort": true, 00:12:14.805 "seek_hole": false, 00:12:14.805 "seek_data": false, 00:12:14.805 "copy": true, 00:12:14.805 "nvme_iov_md": false 00:12:14.805 }, 00:12:14.805 "memory_domains": [ 00:12:14.805 { 00:12:14.805 "dma_device_id": "system", 00:12:14.805 "dma_device_type": 1 00:12:14.805 }, 00:12:14.805 { 00:12:14.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.805 "dma_device_type": 2 00:12:14.805 } 00:12:14.805 ], 00:12:14.805 "driver_specific": {} 00:12:14.805 } 00:12:14.805 ] 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.805 "name": "Existed_Raid", 
00:12:14.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.805 "strip_size_kb": 64, 00:12:14.805 "state": "configuring", 00:12:14.805 "raid_level": "raid0", 00:12:14.805 "superblock": false, 00:12:14.805 "num_base_bdevs": 4, 00:12:14.805 "num_base_bdevs_discovered": 1, 00:12:14.805 "num_base_bdevs_operational": 4, 00:12:14.805 "base_bdevs_list": [ 00:12:14.805 { 00:12:14.805 "name": "BaseBdev1", 00:12:14.805 "uuid": "3d3f4ec2-a8b6-4b2e-886a-5c159f9c2b78", 00:12:14.805 "is_configured": true, 00:12:14.805 "data_offset": 0, 00:12:14.805 "data_size": 65536 00:12:14.805 }, 00:12:14.805 { 00:12:14.805 "name": "BaseBdev2", 00:12:14.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.805 "is_configured": false, 00:12:14.805 "data_offset": 0, 00:12:14.805 "data_size": 0 00:12:14.805 }, 00:12:14.805 { 00:12:14.805 "name": "BaseBdev3", 00:12:14.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.805 "is_configured": false, 00:12:14.805 "data_offset": 0, 00:12:14.805 "data_size": 0 00:12:14.805 }, 00:12:14.805 { 00:12:14.805 "name": "BaseBdev4", 00:12:14.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.805 "is_configured": false, 00:12:14.805 "data_offset": 0, 00:12:14.805 "data_size": 0 00:12:14.805 } 00:12:14.805 ] 00:12:14.805 }' 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.805 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.064 [2024-11-27 04:29:11.623778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.064 [2024-11-27 04:29:11.623912] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.064 [2024-11-27 04:29:11.631846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.064 [2024-11-27 04:29:11.634140] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.064 [2024-11-27 04:29:11.634236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.064 [2024-11-27 04:29:11.634271] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:15.064 [2024-11-27 04:29:11.634301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:15.064 [2024-11-27 04:29:11.634325] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:15.064 [2024-11-27 04:29:11.634351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.064 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.325 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.325 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.325 "name": "Existed_Raid", 00:12:15.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.325 "strip_size_kb": 64, 00:12:15.325 "state": "configuring", 00:12:15.325 "raid_level": "raid0", 00:12:15.325 "superblock": false, 00:12:15.325 "num_base_bdevs": 4, 00:12:15.325 
"num_base_bdevs_discovered": 1, 00:12:15.325 "num_base_bdevs_operational": 4, 00:12:15.325 "base_bdevs_list": [ 00:12:15.325 { 00:12:15.325 "name": "BaseBdev1", 00:12:15.325 "uuid": "3d3f4ec2-a8b6-4b2e-886a-5c159f9c2b78", 00:12:15.325 "is_configured": true, 00:12:15.325 "data_offset": 0, 00:12:15.325 "data_size": 65536 00:12:15.325 }, 00:12:15.325 { 00:12:15.325 "name": "BaseBdev2", 00:12:15.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.325 "is_configured": false, 00:12:15.325 "data_offset": 0, 00:12:15.325 "data_size": 0 00:12:15.325 }, 00:12:15.325 { 00:12:15.325 "name": "BaseBdev3", 00:12:15.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.325 "is_configured": false, 00:12:15.325 "data_offset": 0, 00:12:15.325 "data_size": 0 00:12:15.325 }, 00:12:15.325 { 00:12:15.325 "name": "BaseBdev4", 00:12:15.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.325 "is_configured": false, 00:12:15.325 "data_offset": 0, 00:12:15.325 "data_size": 0 00:12:15.325 } 00:12:15.325 ] 00:12:15.325 }' 00:12:15.325 04:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.325 04:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.586 [2024-11-27 04:29:12.103374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.586 BaseBdev2 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:15.586 04:29:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.586 [ 00:12:15.586 { 00:12:15.586 "name": "BaseBdev2", 00:12:15.586 "aliases": [ 00:12:15.586 "887c670f-23e6-4d48-91c3-d6cff9841fd0" 00:12:15.586 ], 00:12:15.586 "product_name": "Malloc disk", 00:12:15.586 "block_size": 512, 00:12:15.586 "num_blocks": 65536, 00:12:15.586 "uuid": "887c670f-23e6-4d48-91c3-d6cff9841fd0", 00:12:15.586 "assigned_rate_limits": { 00:12:15.586 "rw_ios_per_sec": 0, 00:12:15.586 "rw_mbytes_per_sec": 0, 00:12:15.586 "r_mbytes_per_sec": 0, 00:12:15.586 "w_mbytes_per_sec": 0 00:12:15.586 }, 00:12:15.586 "claimed": true, 00:12:15.586 "claim_type": "exclusive_write", 00:12:15.586 "zoned": false, 00:12:15.586 "supported_io_types": { 
00:12:15.586 "read": true, 00:12:15.586 "write": true, 00:12:15.586 "unmap": true, 00:12:15.586 "flush": true, 00:12:15.586 "reset": true, 00:12:15.586 "nvme_admin": false, 00:12:15.586 "nvme_io": false, 00:12:15.586 "nvme_io_md": false, 00:12:15.586 "write_zeroes": true, 00:12:15.586 "zcopy": true, 00:12:15.586 "get_zone_info": false, 00:12:15.586 "zone_management": false, 00:12:15.586 "zone_append": false, 00:12:15.586 "compare": false, 00:12:15.586 "compare_and_write": false, 00:12:15.586 "abort": true, 00:12:15.586 "seek_hole": false, 00:12:15.586 "seek_data": false, 00:12:15.586 "copy": true, 00:12:15.586 "nvme_iov_md": false 00:12:15.586 }, 00:12:15.586 "memory_domains": [ 00:12:15.586 { 00:12:15.586 "dma_device_id": "system", 00:12:15.586 "dma_device_type": 1 00:12:15.586 }, 00:12:15.586 { 00:12:15.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.586 "dma_device_type": 2 00:12:15.586 } 00:12:15.586 ], 00:12:15.586 "driver_specific": {} 00:12:15.586 } 00:12:15.586 ] 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.586 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.587 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.587 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.587 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.847 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.847 "name": "Existed_Raid", 00:12:15.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.847 "strip_size_kb": 64, 00:12:15.847 "state": "configuring", 00:12:15.847 "raid_level": "raid0", 00:12:15.847 "superblock": false, 00:12:15.847 "num_base_bdevs": 4, 00:12:15.847 "num_base_bdevs_discovered": 2, 00:12:15.847 "num_base_bdevs_operational": 4, 00:12:15.847 "base_bdevs_list": [ 00:12:15.847 { 00:12:15.847 "name": "BaseBdev1", 00:12:15.847 "uuid": "3d3f4ec2-a8b6-4b2e-886a-5c159f9c2b78", 00:12:15.847 "is_configured": true, 00:12:15.847 "data_offset": 0, 00:12:15.847 "data_size": 65536 00:12:15.847 }, 00:12:15.847 { 00:12:15.847 "name": "BaseBdev2", 00:12:15.847 "uuid": "887c670f-23e6-4d48-91c3-d6cff9841fd0", 00:12:15.847 
"is_configured": true, 00:12:15.847 "data_offset": 0, 00:12:15.847 "data_size": 65536 00:12:15.847 }, 00:12:15.847 { 00:12:15.847 "name": "BaseBdev3", 00:12:15.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.847 "is_configured": false, 00:12:15.847 "data_offset": 0, 00:12:15.847 "data_size": 0 00:12:15.847 }, 00:12:15.847 { 00:12:15.847 "name": "BaseBdev4", 00:12:15.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.847 "is_configured": false, 00:12:15.847 "data_offset": 0, 00:12:15.847 "data_size": 0 00:12:15.847 } 00:12:15.847 ] 00:12:15.847 }' 00:12:15.847 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.847 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.107 [2024-11-27 04:29:12.644275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.107 BaseBdev3 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.107 [ 00:12:16.107 { 00:12:16.107 "name": "BaseBdev3", 00:12:16.107 "aliases": [ 00:12:16.107 "34372796-6eb1-4be8-8abe-f36f60579f5e" 00:12:16.107 ], 00:12:16.107 "product_name": "Malloc disk", 00:12:16.107 "block_size": 512, 00:12:16.107 "num_blocks": 65536, 00:12:16.107 "uuid": "34372796-6eb1-4be8-8abe-f36f60579f5e", 00:12:16.107 "assigned_rate_limits": { 00:12:16.107 "rw_ios_per_sec": 0, 00:12:16.107 "rw_mbytes_per_sec": 0, 00:12:16.107 "r_mbytes_per_sec": 0, 00:12:16.107 "w_mbytes_per_sec": 0 00:12:16.107 }, 00:12:16.107 "claimed": true, 00:12:16.107 "claim_type": "exclusive_write", 00:12:16.107 "zoned": false, 00:12:16.107 "supported_io_types": { 00:12:16.107 "read": true, 00:12:16.107 "write": true, 00:12:16.107 "unmap": true, 00:12:16.107 "flush": true, 00:12:16.107 "reset": true, 00:12:16.107 "nvme_admin": false, 00:12:16.107 "nvme_io": false, 00:12:16.107 "nvme_io_md": false, 00:12:16.107 "write_zeroes": true, 00:12:16.107 "zcopy": true, 00:12:16.107 "get_zone_info": false, 00:12:16.107 "zone_management": false, 00:12:16.107 "zone_append": false, 00:12:16.107 "compare": false, 00:12:16.107 "compare_and_write": false, 
00:12:16.107 "abort": true, 00:12:16.107 "seek_hole": false, 00:12:16.107 "seek_data": false, 00:12:16.107 "copy": true, 00:12:16.107 "nvme_iov_md": false 00:12:16.107 }, 00:12:16.107 "memory_domains": [ 00:12:16.107 { 00:12:16.107 "dma_device_id": "system", 00:12:16.107 "dma_device_type": 1 00:12:16.107 }, 00:12:16.107 { 00:12:16.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.107 "dma_device_type": 2 00:12:16.107 } 00:12:16.107 ], 00:12:16.107 "driver_specific": {} 00:12:16.107 } 00:12:16.107 ] 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:16.107 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.108 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.367 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.367 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.367 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.367 "name": "Existed_Raid", 00:12:16.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.367 "strip_size_kb": 64, 00:12:16.367 "state": "configuring", 00:12:16.367 "raid_level": "raid0", 00:12:16.367 "superblock": false, 00:12:16.367 "num_base_bdevs": 4, 00:12:16.367 "num_base_bdevs_discovered": 3, 00:12:16.367 "num_base_bdevs_operational": 4, 00:12:16.367 "base_bdevs_list": [ 00:12:16.367 { 00:12:16.367 "name": "BaseBdev1", 00:12:16.367 "uuid": "3d3f4ec2-a8b6-4b2e-886a-5c159f9c2b78", 00:12:16.367 "is_configured": true, 00:12:16.367 "data_offset": 0, 00:12:16.367 "data_size": 65536 00:12:16.367 }, 00:12:16.367 { 00:12:16.367 "name": "BaseBdev2", 00:12:16.367 "uuid": "887c670f-23e6-4d48-91c3-d6cff9841fd0", 00:12:16.367 "is_configured": true, 00:12:16.367 "data_offset": 0, 00:12:16.367 "data_size": 65536 00:12:16.367 }, 00:12:16.367 { 00:12:16.367 "name": "BaseBdev3", 00:12:16.367 "uuid": "34372796-6eb1-4be8-8abe-f36f60579f5e", 00:12:16.367 "is_configured": true, 00:12:16.367 "data_offset": 0, 00:12:16.367 "data_size": 65536 00:12:16.367 }, 00:12:16.367 { 00:12:16.367 "name": "BaseBdev4", 00:12:16.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.367 "is_configured": false, 
00:12:16.367 "data_offset": 0, 00:12:16.367 "data_size": 0 00:12:16.367 } 00:12:16.367 ] 00:12:16.367 }' 00:12:16.367 04:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.367 04:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.627 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:16.627 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.627 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.627 [2024-11-27 04:29:13.208037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:16.627 [2024-11-27 04:29:13.208115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:16.627 [2024-11-27 04:29:13.208133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:16.627 [2024-11-27 04:29:13.208483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:16.627 [2024-11-27 04:29:13.208691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:16.627 [2024-11-27 04:29:13.208706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:16.627 [2024-11-27 04:29:13.209024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.887 BaseBdev4 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.887 [ 00:12:16.887 { 00:12:16.887 "name": "BaseBdev4", 00:12:16.887 "aliases": [ 00:12:16.887 "cec27b84-e781-4c71-ab72-9838cd5ece7a" 00:12:16.887 ], 00:12:16.887 "product_name": "Malloc disk", 00:12:16.887 "block_size": 512, 00:12:16.887 "num_blocks": 65536, 00:12:16.887 "uuid": "cec27b84-e781-4c71-ab72-9838cd5ece7a", 00:12:16.887 "assigned_rate_limits": { 00:12:16.887 "rw_ios_per_sec": 0, 00:12:16.887 "rw_mbytes_per_sec": 0, 00:12:16.887 "r_mbytes_per_sec": 0, 00:12:16.887 "w_mbytes_per_sec": 0 00:12:16.887 }, 00:12:16.887 "claimed": true, 00:12:16.887 "claim_type": "exclusive_write", 00:12:16.887 "zoned": false, 00:12:16.887 "supported_io_types": { 00:12:16.887 "read": true, 00:12:16.887 "write": true, 00:12:16.887 "unmap": true, 00:12:16.887 "flush": true, 00:12:16.887 "reset": true, 00:12:16.887 
"nvme_admin": false, 00:12:16.887 "nvme_io": false, 00:12:16.887 "nvme_io_md": false, 00:12:16.887 "write_zeroes": true, 00:12:16.887 "zcopy": true, 00:12:16.887 "get_zone_info": false, 00:12:16.887 "zone_management": false, 00:12:16.887 "zone_append": false, 00:12:16.887 "compare": false, 00:12:16.887 "compare_and_write": false, 00:12:16.887 "abort": true, 00:12:16.887 "seek_hole": false, 00:12:16.887 "seek_data": false, 00:12:16.887 "copy": true, 00:12:16.887 "nvme_iov_md": false 00:12:16.887 }, 00:12:16.887 "memory_domains": [ 00:12:16.887 { 00:12:16.887 "dma_device_id": "system", 00:12:16.887 "dma_device_type": 1 00:12:16.887 }, 00:12:16.887 { 00:12:16.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.887 "dma_device_type": 2 00:12:16.887 } 00:12:16.887 ], 00:12:16.887 "driver_specific": {} 00:12:16.887 } 00:12:16.887 ] 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.887 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.888 04:29:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.888 "name": "Existed_Raid", 00:12:16.888 "uuid": "41367cc8-72da-4624-a1cc-ad9271695e2e", 00:12:16.888 "strip_size_kb": 64, 00:12:16.888 "state": "online", 00:12:16.888 "raid_level": "raid0", 00:12:16.888 "superblock": false, 00:12:16.888 "num_base_bdevs": 4, 00:12:16.888 "num_base_bdevs_discovered": 4, 00:12:16.888 "num_base_bdevs_operational": 4, 00:12:16.888 "base_bdevs_list": [ 00:12:16.888 { 00:12:16.888 "name": "BaseBdev1", 00:12:16.888 "uuid": "3d3f4ec2-a8b6-4b2e-886a-5c159f9c2b78", 00:12:16.888 "is_configured": true, 00:12:16.888 "data_offset": 0, 00:12:16.888 "data_size": 65536 00:12:16.888 }, 00:12:16.888 { 00:12:16.888 "name": "BaseBdev2", 00:12:16.888 "uuid": "887c670f-23e6-4d48-91c3-d6cff9841fd0", 00:12:16.888 "is_configured": true, 00:12:16.888 "data_offset": 0, 00:12:16.888 "data_size": 65536 00:12:16.888 }, 00:12:16.888 { 00:12:16.888 "name": "BaseBdev3", 00:12:16.888 "uuid": 
"34372796-6eb1-4be8-8abe-f36f60579f5e", 00:12:16.888 "is_configured": true, 00:12:16.888 "data_offset": 0, 00:12:16.888 "data_size": 65536 00:12:16.888 }, 00:12:16.888 { 00:12:16.888 "name": "BaseBdev4", 00:12:16.888 "uuid": "cec27b84-e781-4c71-ab72-9838cd5ece7a", 00:12:16.888 "is_configured": true, 00:12:16.888 "data_offset": 0, 00:12:16.888 "data_size": 65536 00:12:16.888 } 00:12:16.888 ] 00:12:16.888 }' 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.888 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.147 [2024-11-27 04:29:13.683927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.147 04:29:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.147 "name": "Existed_Raid", 00:12:17.147 "aliases": [ 00:12:17.147 "41367cc8-72da-4624-a1cc-ad9271695e2e" 00:12:17.147 ], 00:12:17.147 "product_name": "Raid Volume", 00:12:17.147 "block_size": 512, 00:12:17.147 "num_blocks": 262144, 00:12:17.147 "uuid": "41367cc8-72da-4624-a1cc-ad9271695e2e", 00:12:17.147 "assigned_rate_limits": { 00:12:17.147 "rw_ios_per_sec": 0, 00:12:17.147 "rw_mbytes_per_sec": 0, 00:12:17.147 "r_mbytes_per_sec": 0, 00:12:17.147 "w_mbytes_per_sec": 0 00:12:17.147 }, 00:12:17.147 "claimed": false, 00:12:17.147 "zoned": false, 00:12:17.147 "supported_io_types": { 00:12:17.147 "read": true, 00:12:17.147 "write": true, 00:12:17.147 "unmap": true, 00:12:17.147 "flush": true, 00:12:17.147 "reset": true, 00:12:17.147 "nvme_admin": false, 00:12:17.147 "nvme_io": false, 00:12:17.147 "nvme_io_md": false, 00:12:17.147 "write_zeroes": true, 00:12:17.147 "zcopy": false, 00:12:17.147 "get_zone_info": false, 00:12:17.147 "zone_management": false, 00:12:17.147 "zone_append": false, 00:12:17.147 "compare": false, 00:12:17.147 "compare_and_write": false, 00:12:17.147 "abort": false, 00:12:17.147 "seek_hole": false, 00:12:17.147 "seek_data": false, 00:12:17.147 "copy": false, 00:12:17.147 "nvme_iov_md": false 00:12:17.147 }, 00:12:17.147 "memory_domains": [ 00:12:17.147 { 00:12:17.147 "dma_device_id": "system", 00:12:17.147 "dma_device_type": 1 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.147 "dma_device_type": 2 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "dma_device_id": "system", 00:12:17.147 "dma_device_type": 1 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.147 "dma_device_type": 2 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "dma_device_id": "system", 00:12:17.147 "dma_device_type": 1 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:17.147 "dma_device_type": 2 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "dma_device_id": "system", 00:12:17.147 "dma_device_type": 1 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.147 "dma_device_type": 2 00:12:17.147 } 00:12:17.147 ], 00:12:17.147 "driver_specific": { 00:12:17.147 "raid": { 00:12:17.147 "uuid": "41367cc8-72da-4624-a1cc-ad9271695e2e", 00:12:17.147 "strip_size_kb": 64, 00:12:17.147 "state": "online", 00:12:17.147 "raid_level": "raid0", 00:12:17.147 "superblock": false, 00:12:17.147 "num_base_bdevs": 4, 00:12:17.147 "num_base_bdevs_discovered": 4, 00:12:17.147 "num_base_bdevs_operational": 4, 00:12:17.147 "base_bdevs_list": [ 00:12:17.147 { 00:12:17.147 "name": "BaseBdev1", 00:12:17.147 "uuid": "3d3f4ec2-a8b6-4b2e-886a-5c159f9c2b78", 00:12:17.147 "is_configured": true, 00:12:17.147 "data_offset": 0, 00:12:17.147 "data_size": 65536 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "name": "BaseBdev2", 00:12:17.147 "uuid": "887c670f-23e6-4d48-91c3-d6cff9841fd0", 00:12:17.147 "is_configured": true, 00:12:17.147 "data_offset": 0, 00:12:17.147 "data_size": 65536 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "name": "BaseBdev3", 00:12:17.147 "uuid": "34372796-6eb1-4be8-8abe-f36f60579f5e", 00:12:17.147 "is_configured": true, 00:12:17.147 "data_offset": 0, 00:12:17.147 "data_size": 65536 00:12:17.147 }, 00:12:17.147 { 00:12:17.147 "name": "BaseBdev4", 00:12:17.147 "uuid": "cec27b84-e781-4c71-ab72-9838cd5ece7a", 00:12:17.147 "is_configured": true, 00:12:17.147 "data_offset": 0, 00:12:17.147 "data_size": 65536 00:12:17.147 } 00:12:17.147 ] 00:12:17.147 } 00:12:17.147 } 00:12:17.147 }' 00:12:17.147 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:17.406 BaseBdev2 00:12:17.406 BaseBdev3 
00:12:17.406 BaseBdev4' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.406 04:29:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.406 04:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.667 04:29:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.667 [2024-11-27 04:29:14.015053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.667 [2024-11-27 04:29:14.015150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.667 [2024-11-27 04:29:14.015264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.667 "name": "Existed_Raid", 00:12:17.667 "uuid": "41367cc8-72da-4624-a1cc-ad9271695e2e", 00:12:17.667 "strip_size_kb": 64, 00:12:17.667 "state": "offline", 00:12:17.667 "raid_level": "raid0", 00:12:17.667 "superblock": false, 00:12:17.667 "num_base_bdevs": 4, 00:12:17.667 "num_base_bdevs_discovered": 3, 00:12:17.667 "num_base_bdevs_operational": 3, 00:12:17.667 "base_bdevs_list": [ 00:12:17.667 { 00:12:17.667 "name": null, 00:12:17.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.667 "is_configured": false, 00:12:17.667 "data_offset": 0, 00:12:17.667 "data_size": 65536 00:12:17.667 }, 00:12:17.667 { 00:12:17.667 "name": "BaseBdev2", 00:12:17.667 "uuid": "887c670f-23e6-4d48-91c3-d6cff9841fd0", 00:12:17.667 "is_configured": 
true, 00:12:17.667 "data_offset": 0, 00:12:17.667 "data_size": 65536 00:12:17.667 }, 00:12:17.667 { 00:12:17.667 "name": "BaseBdev3", 00:12:17.667 "uuid": "34372796-6eb1-4be8-8abe-f36f60579f5e", 00:12:17.667 "is_configured": true, 00:12:17.667 "data_offset": 0, 00:12:17.667 "data_size": 65536 00:12:17.667 }, 00:12:17.667 { 00:12:17.667 "name": "BaseBdev4", 00:12:17.667 "uuid": "cec27b84-e781-4c71-ab72-9838cd5ece7a", 00:12:17.667 "is_configured": true, 00:12:17.667 "data_offset": 0, 00:12:17.667 "data_size": 65536 00:12:17.667 } 00:12:17.667 ] 00:12:17.667 }' 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.667 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:18.253 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.253 [2024-11-27 04:29:14.659228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.254 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.254 [2024-11-27 04:29:14.828848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.513 04:29:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.513 04:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.513 [2024-11-27 04:29:15.000383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:18.513 [2024-11-27 04:29:15.000439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.772 BaseBdev2 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.772 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.772 [ 00:12:18.772 { 00:12:18.772 "name": "BaseBdev2", 00:12:18.773 "aliases": [ 00:12:18.773 "d529cfc3-da7d-43e5-8c22-d3b5027de3ae" 00:12:18.773 ], 00:12:18.773 "product_name": "Malloc disk", 00:12:18.773 "block_size": 512, 00:12:18.773 "num_blocks": 65536, 00:12:18.773 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:18.773 "assigned_rate_limits": { 00:12:18.773 "rw_ios_per_sec": 0, 00:12:18.773 "rw_mbytes_per_sec": 0, 00:12:18.773 "r_mbytes_per_sec": 0, 00:12:18.773 "w_mbytes_per_sec": 0 00:12:18.773 }, 00:12:18.773 "claimed": false, 00:12:18.773 "zoned": false, 00:12:18.773 "supported_io_types": { 00:12:18.773 "read": true, 00:12:18.773 "write": true, 00:12:18.773 "unmap": true, 00:12:18.773 "flush": true, 00:12:18.773 "reset": true, 00:12:18.773 "nvme_admin": false, 00:12:18.773 "nvme_io": false, 00:12:18.773 "nvme_io_md": false, 00:12:18.773 "write_zeroes": true, 00:12:18.773 "zcopy": true, 00:12:18.773 "get_zone_info": false, 00:12:18.773 "zone_management": false, 00:12:18.773 "zone_append": false, 00:12:18.773 "compare": false, 00:12:18.773 "compare_and_write": false, 00:12:18.773 "abort": true, 00:12:18.773 "seek_hole": false, 00:12:18.773 "seek_data": false, 
00:12:18.773 "copy": true, 00:12:18.773 "nvme_iov_md": false 00:12:18.773 }, 00:12:18.773 "memory_domains": [ 00:12:18.773 { 00:12:18.773 "dma_device_id": "system", 00:12:18.773 "dma_device_type": 1 00:12:18.773 }, 00:12:18.773 { 00:12:18.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.773 "dma_device_type": 2 00:12:18.773 } 00:12:18.773 ], 00:12:18.773 "driver_specific": {} 00:12:18.773 } 00:12:18.773 ] 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.773 BaseBdev3 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.773 
04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.773 [ 00:12:18.773 { 00:12:18.773 "name": "BaseBdev3", 00:12:18.773 "aliases": [ 00:12:18.773 "70f7c68f-a6c9-4f08-b653-72a62de16bae" 00:12:18.773 ], 00:12:18.773 "product_name": "Malloc disk", 00:12:18.773 "block_size": 512, 00:12:18.773 "num_blocks": 65536, 00:12:18.773 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:18.773 "assigned_rate_limits": { 00:12:18.773 "rw_ios_per_sec": 0, 00:12:18.773 "rw_mbytes_per_sec": 0, 00:12:18.773 "r_mbytes_per_sec": 0, 00:12:18.773 "w_mbytes_per_sec": 0 00:12:18.773 }, 00:12:18.773 "claimed": false, 00:12:18.773 "zoned": false, 00:12:18.773 "supported_io_types": { 00:12:18.773 "read": true, 00:12:18.773 "write": true, 00:12:18.773 "unmap": true, 00:12:18.773 "flush": true, 00:12:18.773 "reset": true, 00:12:18.773 "nvme_admin": false, 00:12:18.773 "nvme_io": false, 00:12:18.773 "nvme_io_md": false, 00:12:18.773 "write_zeroes": true, 00:12:18.773 "zcopy": true, 00:12:18.773 "get_zone_info": false, 00:12:18.773 "zone_management": false, 00:12:18.773 "zone_append": false, 00:12:18.773 "compare": false, 00:12:18.773 "compare_and_write": false, 00:12:18.773 "abort": true, 00:12:18.773 "seek_hole": false, 00:12:18.773 "seek_data": false, 00:12:18.773 
"copy": true, 00:12:18.773 "nvme_iov_md": false 00:12:18.773 }, 00:12:18.773 "memory_domains": [ 00:12:18.773 { 00:12:18.773 "dma_device_id": "system", 00:12:18.773 "dma_device_type": 1 00:12:18.773 }, 00:12:18.773 { 00:12:18.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.773 "dma_device_type": 2 00:12:18.773 } 00:12:18.773 ], 00:12:18.773 "driver_specific": {} 00:12:18.773 } 00:12:18.773 ] 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.773 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.032 BaseBdev4 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:19.033 04:29:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.033 [ 00:12:19.033 { 00:12:19.033 "name": "BaseBdev4", 00:12:19.033 "aliases": [ 00:12:19.033 "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd" 00:12:19.033 ], 00:12:19.033 "product_name": "Malloc disk", 00:12:19.033 "block_size": 512, 00:12:19.033 "num_blocks": 65536, 00:12:19.033 "uuid": "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:19.033 "assigned_rate_limits": { 00:12:19.033 "rw_ios_per_sec": 0, 00:12:19.033 "rw_mbytes_per_sec": 0, 00:12:19.033 "r_mbytes_per_sec": 0, 00:12:19.033 "w_mbytes_per_sec": 0 00:12:19.033 }, 00:12:19.033 "claimed": false, 00:12:19.033 "zoned": false, 00:12:19.033 "supported_io_types": { 00:12:19.033 "read": true, 00:12:19.033 "write": true, 00:12:19.033 "unmap": true, 00:12:19.033 "flush": true, 00:12:19.033 "reset": true, 00:12:19.033 "nvme_admin": false, 00:12:19.033 "nvme_io": false, 00:12:19.033 "nvme_io_md": false, 00:12:19.033 "write_zeroes": true, 00:12:19.033 "zcopy": true, 00:12:19.033 "get_zone_info": false, 00:12:19.033 "zone_management": false, 00:12:19.033 "zone_append": false, 00:12:19.033 "compare": false, 00:12:19.033 "compare_and_write": false, 00:12:19.033 "abort": true, 00:12:19.033 "seek_hole": false, 00:12:19.033 "seek_data": false, 00:12:19.033 "copy": true, 
00:12:19.033 "nvme_iov_md": false 00:12:19.033 }, 00:12:19.033 "memory_domains": [ 00:12:19.033 { 00:12:19.033 "dma_device_id": "system", 00:12:19.033 "dma_device_type": 1 00:12:19.033 }, 00:12:19.033 { 00:12:19.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.033 "dma_device_type": 2 00:12:19.033 } 00:12:19.033 ], 00:12:19.033 "driver_specific": {} 00:12:19.033 } 00:12:19.033 ] 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.033 [2024-11-27 04:29:15.444540] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:19.033 [2024-11-27 04:29:15.444596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:19.033 [2024-11-27 04:29:15.444626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.033 [2024-11-27 04:29:15.446774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.033 [2024-11-27 04:29:15.446842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.033 04:29:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.033 "name": "Existed_Raid", 00:12:19.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.033 "strip_size_kb": 64, 00:12:19.033 "state": "configuring", 00:12:19.033 
"raid_level": "raid0", 00:12:19.033 "superblock": false, 00:12:19.033 "num_base_bdevs": 4, 00:12:19.033 "num_base_bdevs_discovered": 3, 00:12:19.033 "num_base_bdevs_operational": 4, 00:12:19.033 "base_bdevs_list": [ 00:12:19.033 { 00:12:19.033 "name": "BaseBdev1", 00:12:19.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.033 "is_configured": false, 00:12:19.033 "data_offset": 0, 00:12:19.033 "data_size": 0 00:12:19.033 }, 00:12:19.033 { 00:12:19.033 "name": "BaseBdev2", 00:12:19.033 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:19.033 "is_configured": true, 00:12:19.033 "data_offset": 0, 00:12:19.033 "data_size": 65536 00:12:19.033 }, 00:12:19.033 { 00:12:19.033 "name": "BaseBdev3", 00:12:19.033 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:19.033 "is_configured": true, 00:12:19.033 "data_offset": 0, 00:12:19.033 "data_size": 65536 00:12:19.033 }, 00:12:19.033 { 00:12:19.033 "name": "BaseBdev4", 00:12:19.033 "uuid": "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:19.033 "is_configured": true, 00:12:19.033 "data_offset": 0, 00:12:19.033 "data_size": 65536 00:12:19.033 } 00:12:19.033 ] 00:12:19.033 }' 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.033 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.601 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.602 [2024-11-27 04:29:15.895785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.602 "name": "Existed_Raid", 00:12:19.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.602 "strip_size_kb": 64, 00:12:19.602 "state": "configuring", 00:12:19.602 "raid_level": "raid0", 00:12:19.602 "superblock": false, 00:12:19.602 
"num_base_bdevs": 4, 00:12:19.602 "num_base_bdevs_discovered": 2, 00:12:19.602 "num_base_bdevs_operational": 4, 00:12:19.602 "base_bdevs_list": [ 00:12:19.602 { 00:12:19.602 "name": "BaseBdev1", 00:12:19.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.602 "is_configured": false, 00:12:19.602 "data_offset": 0, 00:12:19.602 "data_size": 0 00:12:19.602 }, 00:12:19.602 { 00:12:19.602 "name": null, 00:12:19.602 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:19.602 "is_configured": false, 00:12:19.602 "data_offset": 0, 00:12:19.602 "data_size": 65536 00:12:19.602 }, 00:12:19.602 { 00:12:19.602 "name": "BaseBdev3", 00:12:19.602 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:19.602 "is_configured": true, 00:12:19.602 "data_offset": 0, 00:12:19.602 "data_size": 65536 00:12:19.602 }, 00:12:19.602 { 00:12:19.602 "name": "BaseBdev4", 00:12:19.602 "uuid": "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:19.602 "is_configured": true, 00:12:19.602 "data_offset": 0, 00:12:19.602 "data_size": 65536 00:12:19.602 } 00:12:19.602 ] 00:12:19.602 }' 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.602 04:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.861 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.861 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.861 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.861 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.861 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.861 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:19.861 04:29:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.861 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.861 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.861 [2024-11-27 04:29:16.443921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.121 BaseBdev1 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.121 04:29:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.121 [ 00:12:20.121 { 00:12:20.121 "name": "BaseBdev1", 00:12:20.121 "aliases": [ 00:12:20.121 "208c528e-11f2-4f9b-baaf-ef4c34abfdad" 00:12:20.121 ], 00:12:20.121 "product_name": "Malloc disk", 00:12:20.121 "block_size": 512, 00:12:20.121 "num_blocks": 65536, 00:12:20.121 "uuid": "208c528e-11f2-4f9b-baaf-ef4c34abfdad", 00:12:20.121 "assigned_rate_limits": { 00:12:20.121 "rw_ios_per_sec": 0, 00:12:20.121 "rw_mbytes_per_sec": 0, 00:12:20.121 "r_mbytes_per_sec": 0, 00:12:20.121 "w_mbytes_per_sec": 0 00:12:20.121 }, 00:12:20.121 "claimed": true, 00:12:20.121 "claim_type": "exclusive_write", 00:12:20.121 "zoned": false, 00:12:20.121 "supported_io_types": { 00:12:20.121 "read": true, 00:12:20.121 "write": true, 00:12:20.121 "unmap": true, 00:12:20.121 "flush": true, 00:12:20.121 "reset": true, 00:12:20.121 "nvme_admin": false, 00:12:20.121 "nvme_io": false, 00:12:20.121 "nvme_io_md": false, 00:12:20.121 "write_zeroes": true, 00:12:20.121 "zcopy": true, 00:12:20.121 "get_zone_info": false, 00:12:20.121 "zone_management": false, 00:12:20.121 "zone_append": false, 00:12:20.121 "compare": false, 00:12:20.121 "compare_and_write": false, 00:12:20.121 "abort": true, 00:12:20.121 "seek_hole": false, 00:12:20.121 "seek_data": false, 00:12:20.121 "copy": true, 00:12:20.121 "nvme_iov_md": false 00:12:20.121 }, 00:12:20.121 "memory_domains": [ 00:12:20.121 { 00:12:20.121 "dma_device_id": "system", 00:12:20.121 "dma_device_type": 1 00:12:20.121 }, 00:12:20.121 { 00:12:20.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.121 "dma_device_type": 2 00:12:20.121 } 00:12:20.121 ], 00:12:20.121 "driver_specific": {} 00:12:20.121 } 00:12:20.121 ] 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.122 "name": "Existed_Raid", 00:12:20.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.122 "strip_size_kb": 64, 00:12:20.122 "state": "configuring", 00:12:20.122 "raid_level": "raid0", 00:12:20.122 "superblock": false, 
00:12:20.122 "num_base_bdevs": 4, 00:12:20.122 "num_base_bdevs_discovered": 3, 00:12:20.122 "num_base_bdevs_operational": 4, 00:12:20.122 "base_bdevs_list": [ 00:12:20.122 { 00:12:20.122 "name": "BaseBdev1", 00:12:20.122 "uuid": "208c528e-11f2-4f9b-baaf-ef4c34abfdad", 00:12:20.122 "is_configured": true, 00:12:20.122 "data_offset": 0, 00:12:20.122 "data_size": 65536 00:12:20.122 }, 00:12:20.122 { 00:12:20.122 "name": null, 00:12:20.122 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:20.122 "is_configured": false, 00:12:20.122 "data_offset": 0, 00:12:20.122 "data_size": 65536 00:12:20.122 }, 00:12:20.122 { 00:12:20.122 "name": "BaseBdev3", 00:12:20.122 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:20.122 "is_configured": true, 00:12:20.122 "data_offset": 0, 00:12:20.122 "data_size": 65536 00:12:20.122 }, 00:12:20.122 { 00:12:20.122 "name": "BaseBdev4", 00:12:20.122 "uuid": "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:20.122 "is_configured": true, 00:12:20.122 "data_offset": 0, 00:12:20.122 "data_size": 65536 00:12:20.122 } 00:12:20.122 ] 00:12:20.122 }' 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.122 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.383 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.383 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:20.383 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.383 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.643 04:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:20.643 04:29:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:20.643 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.643 04:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 [2024-11-27 04:29:17.003156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.643 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.644 04:29:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.644 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.644 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.644 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.644 "name": "Existed_Raid", 00:12:20.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.644 "strip_size_kb": 64, 00:12:20.644 "state": "configuring", 00:12:20.644 "raid_level": "raid0", 00:12:20.644 "superblock": false, 00:12:20.644 "num_base_bdevs": 4, 00:12:20.644 "num_base_bdevs_discovered": 2, 00:12:20.644 "num_base_bdevs_operational": 4, 00:12:20.644 "base_bdevs_list": [ 00:12:20.644 { 00:12:20.644 "name": "BaseBdev1", 00:12:20.644 "uuid": "208c528e-11f2-4f9b-baaf-ef4c34abfdad", 00:12:20.644 "is_configured": true, 00:12:20.644 "data_offset": 0, 00:12:20.644 "data_size": 65536 00:12:20.644 }, 00:12:20.644 { 00:12:20.644 "name": null, 00:12:20.644 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:20.644 "is_configured": false, 00:12:20.644 "data_offset": 0, 00:12:20.644 "data_size": 65536 00:12:20.644 }, 00:12:20.644 { 00:12:20.644 "name": null, 00:12:20.644 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:20.644 "is_configured": false, 00:12:20.644 "data_offset": 0, 00:12:20.644 "data_size": 65536 00:12:20.644 }, 00:12:20.644 { 00:12:20.644 "name": "BaseBdev4", 00:12:20.644 "uuid": "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:20.644 "is_configured": true, 00:12:20.644 "data_offset": 0, 00:12:20.644 "data_size": 65536 00:12:20.644 } 00:12:20.644 ] 00:12:20.644 }' 00:12:20.644 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.644 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.904 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:20.904 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.904 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.904 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.904 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.904 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:20.904 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:20.904 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.904 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.163 [2024-11-27 04:29:17.490309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.163 "name": "Existed_Raid", 00:12:21.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.163 "strip_size_kb": 64, 00:12:21.163 "state": "configuring", 00:12:21.163 "raid_level": "raid0", 00:12:21.163 "superblock": false, 00:12:21.163 "num_base_bdevs": 4, 00:12:21.163 "num_base_bdevs_discovered": 3, 00:12:21.163 "num_base_bdevs_operational": 4, 00:12:21.163 "base_bdevs_list": [ 00:12:21.163 { 00:12:21.163 "name": "BaseBdev1", 00:12:21.163 "uuid": "208c528e-11f2-4f9b-baaf-ef4c34abfdad", 00:12:21.163 "is_configured": true, 00:12:21.163 "data_offset": 0, 00:12:21.163 "data_size": 65536 00:12:21.163 }, 00:12:21.163 { 00:12:21.163 "name": null, 00:12:21.163 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:21.163 "is_configured": false, 00:12:21.163 "data_offset": 0, 00:12:21.163 "data_size": 65536 00:12:21.163 }, 00:12:21.163 { 00:12:21.163 "name": "BaseBdev3", 00:12:21.163 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:21.163 "is_configured": 
true, 00:12:21.163 "data_offset": 0, 00:12:21.163 "data_size": 65536 00:12:21.163 }, 00:12:21.163 { 00:12:21.163 "name": "BaseBdev4", 00:12:21.163 "uuid": "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:21.163 "is_configured": true, 00:12:21.163 "data_offset": 0, 00:12:21.163 "data_size": 65536 00:12:21.163 } 00:12:21.163 ] 00:12:21.163 }' 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.163 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.422 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.422 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.422 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.422 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:21.422 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.422 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:21.422 04:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:21.422 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.422 04:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.422 [2024-11-27 04:29:17.957624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.682 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.682 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:21.682 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.683 "name": "Existed_Raid", 00:12:21.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.683 "strip_size_kb": 64, 00:12:21.683 "state": "configuring", 00:12:21.683 "raid_level": "raid0", 00:12:21.683 "superblock": false, 00:12:21.683 "num_base_bdevs": 4, 00:12:21.683 "num_base_bdevs_discovered": 2, 00:12:21.683 "num_base_bdevs_operational": 4, 00:12:21.683 
"base_bdevs_list": [ 00:12:21.683 { 00:12:21.683 "name": null, 00:12:21.683 "uuid": "208c528e-11f2-4f9b-baaf-ef4c34abfdad", 00:12:21.683 "is_configured": false, 00:12:21.683 "data_offset": 0, 00:12:21.683 "data_size": 65536 00:12:21.683 }, 00:12:21.683 { 00:12:21.683 "name": null, 00:12:21.683 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:21.683 "is_configured": false, 00:12:21.683 "data_offset": 0, 00:12:21.683 "data_size": 65536 00:12:21.683 }, 00:12:21.683 { 00:12:21.683 "name": "BaseBdev3", 00:12:21.683 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:21.683 "is_configured": true, 00:12:21.683 "data_offset": 0, 00:12:21.683 "data_size": 65536 00:12:21.683 }, 00:12:21.683 { 00:12:21.683 "name": "BaseBdev4", 00:12:21.683 "uuid": "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:21.683 "is_configured": true, 00:12:21.683 "data_offset": 0, 00:12:21.683 "data_size": 65536 00:12:21.683 } 00:12:21.683 ] 00:12:21.683 }' 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.683 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:22.252 04:29:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.252 [2024-11-27 04:29:18.617743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.252 "name": "Existed_Raid", 00:12:22.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.252 "strip_size_kb": 64, 00:12:22.252 "state": "configuring", 00:12:22.252 "raid_level": "raid0", 00:12:22.252 "superblock": false, 00:12:22.252 "num_base_bdevs": 4, 00:12:22.252 "num_base_bdevs_discovered": 3, 00:12:22.252 "num_base_bdevs_operational": 4, 00:12:22.252 "base_bdevs_list": [ 00:12:22.252 { 00:12:22.252 "name": null, 00:12:22.252 "uuid": "208c528e-11f2-4f9b-baaf-ef4c34abfdad", 00:12:22.252 "is_configured": false, 00:12:22.252 "data_offset": 0, 00:12:22.252 "data_size": 65536 00:12:22.252 }, 00:12:22.252 { 00:12:22.252 "name": "BaseBdev2", 00:12:22.252 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:22.252 "is_configured": true, 00:12:22.252 "data_offset": 0, 00:12:22.252 "data_size": 65536 00:12:22.252 }, 00:12:22.252 { 00:12:22.252 "name": "BaseBdev3", 00:12:22.252 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:22.252 "is_configured": true, 00:12:22.252 "data_offset": 0, 00:12:22.252 "data_size": 65536 00:12:22.252 }, 00:12:22.252 { 00:12:22.252 "name": "BaseBdev4", 00:12:22.252 "uuid": "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:22.252 "is_configured": true, 00:12:22.252 "data_offset": 0, 00:12:22.252 "data_size": 65536 00:12:22.252 } 00:12:22.252 ] 00:12:22.252 }' 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.252 04:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.511 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.511 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:22.511 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.511 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.511 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 208c528e-11f2-4f9b-baaf-ef4c34abfdad 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.770 [2024-11-27 04:29:19.186996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:22.770 [2024-11-27 04:29:19.187055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:22.770 [2024-11-27 04:29:19.187064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:22.770 [2024-11-27 04:29:19.187398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:22.770 [2024-11-27 04:29:19.187603] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000008200 00:12:22.770 [2024-11-27 04:29:19.187624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:22.770 [2024-11-27 04:29:19.187903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.770 NewBaseBdev 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.770 [ 00:12:22.770 { 00:12:22.770 "name": "NewBaseBdev", 00:12:22.770 
"aliases": [ 00:12:22.770 "208c528e-11f2-4f9b-baaf-ef4c34abfdad" 00:12:22.770 ], 00:12:22.770 "product_name": "Malloc disk", 00:12:22.770 "block_size": 512, 00:12:22.770 "num_blocks": 65536, 00:12:22.770 "uuid": "208c528e-11f2-4f9b-baaf-ef4c34abfdad", 00:12:22.770 "assigned_rate_limits": { 00:12:22.770 "rw_ios_per_sec": 0, 00:12:22.770 "rw_mbytes_per_sec": 0, 00:12:22.770 "r_mbytes_per_sec": 0, 00:12:22.770 "w_mbytes_per_sec": 0 00:12:22.770 }, 00:12:22.770 "claimed": true, 00:12:22.770 "claim_type": "exclusive_write", 00:12:22.770 "zoned": false, 00:12:22.770 "supported_io_types": { 00:12:22.770 "read": true, 00:12:22.770 "write": true, 00:12:22.770 "unmap": true, 00:12:22.770 "flush": true, 00:12:22.770 "reset": true, 00:12:22.770 "nvme_admin": false, 00:12:22.770 "nvme_io": false, 00:12:22.770 "nvme_io_md": false, 00:12:22.770 "write_zeroes": true, 00:12:22.770 "zcopy": true, 00:12:22.770 "get_zone_info": false, 00:12:22.770 "zone_management": false, 00:12:22.770 "zone_append": false, 00:12:22.770 "compare": false, 00:12:22.770 "compare_and_write": false, 00:12:22.770 "abort": true, 00:12:22.770 "seek_hole": false, 00:12:22.770 "seek_data": false, 00:12:22.770 "copy": true, 00:12:22.770 "nvme_iov_md": false 00:12:22.770 }, 00:12:22.770 "memory_domains": [ 00:12:22.770 { 00:12:22.770 "dma_device_id": "system", 00:12:22.770 "dma_device_type": 1 00:12:22.770 }, 00:12:22.770 { 00:12:22.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.770 "dma_device_type": 2 00:12:22.770 } 00:12:22.770 ], 00:12:22.770 "driver_specific": {} 00:12:22.770 } 00:12:22.770 ] 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.770 "name": "Existed_Raid", 00:12:22.770 "uuid": "ad66a275-405d-4899-bdc4-812534e68bd8", 00:12:22.770 "strip_size_kb": 64, 00:12:22.770 "state": "online", 00:12:22.770 "raid_level": "raid0", 00:12:22.770 "superblock": false, 00:12:22.770 "num_base_bdevs": 4, 00:12:22.770 "num_base_bdevs_discovered": 4, 00:12:22.770 "num_base_bdevs_operational": 4, 00:12:22.770 
"base_bdevs_list": [ 00:12:22.770 { 00:12:22.770 "name": "NewBaseBdev", 00:12:22.770 "uuid": "208c528e-11f2-4f9b-baaf-ef4c34abfdad", 00:12:22.770 "is_configured": true, 00:12:22.770 "data_offset": 0, 00:12:22.770 "data_size": 65536 00:12:22.770 }, 00:12:22.770 { 00:12:22.770 "name": "BaseBdev2", 00:12:22.770 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:22.770 "is_configured": true, 00:12:22.770 "data_offset": 0, 00:12:22.770 "data_size": 65536 00:12:22.770 }, 00:12:22.770 { 00:12:22.770 "name": "BaseBdev3", 00:12:22.770 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:22.770 "is_configured": true, 00:12:22.770 "data_offset": 0, 00:12:22.770 "data_size": 65536 00:12:22.770 }, 00:12:22.770 { 00:12:22.770 "name": "BaseBdev4", 00:12:22.770 "uuid": "0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:22.770 "is_configured": true, 00:12:22.770 "data_offset": 0, 00:12:22.770 "data_size": 65536 00:12:22.770 } 00:12:22.770 ] 00:12:22.770 }' 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.770 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:23.337 04:29:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.337 [2024-11-27 04:29:19.710639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.337 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.337 "name": "Existed_Raid", 00:12:23.337 "aliases": [ 00:12:23.337 "ad66a275-405d-4899-bdc4-812534e68bd8" 00:12:23.337 ], 00:12:23.337 "product_name": "Raid Volume", 00:12:23.337 "block_size": 512, 00:12:23.337 "num_blocks": 262144, 00:12:23.337 "uuid": "ad66a275-405d-4899-bdc4-812534e68bd8", 00:12:23.337 "assigned_rate_limits": { 00:12:23.337 "rw_ios_per_sec": 0, 00:12:23.337 "rw_mbytes_per_sec": 0, 00:12:23.337 "r_mbytes_per_sec": 0, 00:12:23.337 "w_mbytes_per_sec": 0 00:12:23.337 }, 00:12:23.337 "claimed": false, 00:12:23.337 "zoned": false, 00:12:23.337 "supported_io_types": { 00:12:23.337 "read": true, 00:12:23.337 "write": true, 00:12:23.337 "unmap": true, 00:12:23.337 "flush": true, 00:12:23.337 "reset": true, 00:12:23.337 "nvme_admin": false, 00:12:23.337 "nvme_io": false, 00:12:23.337 "nvme_io_md": false, 00:12:23.337 "write_zeroes": true, 00:12:23.337 "zcopy": false, 00:12:23.337 "get_zone_info": false, 00:12:23.337 "zone_management": false, 00:12:23.337 "zone_append": false, 00:12:23.337 "compare": false, 00:12:23.337 "compare_and_write": false, 00:12:23.337 "abort": false, 00:12:23.337 "seek_hole": false, 00:12:23.337 "seek_data": false, 00:12:23.337 "copy": false, 00:12:23.337 "nvme_iov_md": false 00:12:23.337 }, 00:12:23.337 "memory_domains": [ 00:12:23.337 { 00:12:23.337 "dma_device_id": "system", 00:12:23.337 "dma_device_type": 1 
00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.337 "dma_device_type": 2 00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "dma_device_id": "system", 00:12:23.337 "dma_device_type": 1 00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.337 "dma_device_type": 2 00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "dma_device_id": "system", 00:12:23.337 "dma_device_type": 1 00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.337 "dma_device_type": 2 00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "dma_device_id": "system", 00:12:23.337 "dma_device_type": 1 00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.337 "dma_device_type": 2 00:12:23.337 } 00:12:23.337 ], 00:12:23.337 "driver_specific": { 00:12:23.337 "raid": { 00:12:23.337 "uuid": "ad66a275-405d-4899-bdc4-812534e68bd8", 00:12:23.337 "strip_size_kb": 64, 00:12:23.337 "state": "online", 00:12:23.337 "raid_level": "raid0", 00:12:23.337 "superblock": false, 00:12:23.337 "num_base_bdevs": 4, 00:12:23.337 "num_base_bdevs_discovered": 4, 00:12:23.337 "num_base_bdevs_operational": 4, 00:12:23.337 "base_bdevs_list": [ 00:12:23.337 { 00:12:23.337 "name": "NewBaseBdev", 00:12:23.337 "uuid": "208c528e-11f2-4f9b-baaf-ef4c34abfdad", 00:12:23.337 "is_configured": true, 00:12:23.337 "data_offset": 0, 00:12:23.337 "data_size": 65536 00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "name": "BaseBdev2", 00:12:23.337 "uuid": "d529cfc3-da7d-43e5-8c22-d3b5027de3ae", 00:12:23.337 "is_configured": true, 00:12:23.337 "data_offset": 0, 00:12:23.337 "data_size": 65536 00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "name": "BaseBdev3", 00:12:23.337 "uuid": "70f7c68f-a6c9-4f08-b653-72a62de16bae", 00:12:23.337 "is_configured": true, 00:12:23.337 "data_offset": 0, 00:12:23.337 "data_size": 65536 00:12:23.337 }, 00:12:23.337 { 00:12:23.337 "name": "BaseBdev4", 00:12:23.337 "uuid": 
"0cd7d3c5-3ff7-41e7-9895-a9bf651520cd", 00:12:23.337 "is_configured": true, 00:12:23.337 "data_offset": 0, 00:12:23.337 "data_size": 65536 00:12:23.337 } 00:12:23.337 ] 00:12:23.337 } 00:12:23.337 } 00:12:23.338 }' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:23.338 BaseBdev2 00:12:23.338 BaseBdev3 00:12:23.338 BaseBdev4' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.338 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.596 04:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.596 [2024-11-27 04:29:20.001705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.596 [2024-11-27 04:29:20.001743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.597 [2024-11-27 04:29:20.001831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.597 [2024-11-27 04:29:20.001912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.597 [2024-11-27 04:29:20.001931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69642 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69642 ']' 00:12:23.597 04:29:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69642 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69642 00:12:23.597 killing process with pid 69642 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69642' 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69642 00:12:23.597 [2024-11-27 04:29:20.046891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.597 04:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69642 00:12:24.164 [2024-11-27 04:29:20.535025] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:25.541 00:12:25.541 real 0m12.119s 00:12:25.541 user 0m19.135s 00:12:25.541 sys 0m1.922s 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.541 ************************************ 00:12:25.541 END TEST raid_state_function_test 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.541 ************************************ 00:12:25.541 04:29:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:12:25.541 04:29:21 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:25.541 04:29:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.541 04:29:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.541 ************************************ 00:12:25.541 START TEST raid_state_function_test_sb 00:12:25.541 ************************************ 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:25.541 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70319 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:12:25.542 Process raid pid: 70319 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70319' 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70319 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70319 ']' 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.542 04:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.542 [2024-11-27 04:29:22.003056] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:12:25.542 [2024-11-27 04:29:22.003192] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.801 [2024-11-27 04:29:22.179750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.801 [2024-11-27 04:29:22.293026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.059 [2024-11-27 04:29:22.496413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.059 [2024-11-27 04:29:22.496464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.318 04:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.318 04:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:26.318 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:26.318 04:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.318 04:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.575 [2024-11-27 04:29:22.905766] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:26.575 [2024-11-27 04:29:22.905821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:26.575 [2024-11-27 04:29:22.905831] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:26.575 [2024-11-27 04:29:22.905842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:26.575 [2024-11-27 04:29:22.905849] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:26.575 [2024-11-27 04:29:22.905858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:26.575 [2024-11-27 04:29:22.905864] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:26.575 [2024-11-27 04:29:22.905873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.575 04:29:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.575 "name": "Existed_Raid", 00:12:26.575 "uuid": "1d67bed2-a096-425b-bf71-7a3413fc282b", 00:12:26.575 "strip_size_kb": 64, 00:12:26.575 "state": "configuring", 00:12:26.575 "raid_level": "raid0", 00:12:26.575 "superblock": true, 00:12:26.575 "num_base_bdevs": 4, 00:12:26.575 "num_base_bdevs_discovered": 0, 00:12:26.575 "num_base_bdevs_operational": 4, 00:12:26.575 "base_bdevs_list": [ 00:12:26.575 { 00:12:26.575 "name": "BaseBdev1", 00:12:26.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.575 "is_configured": false, 00:12:26.575 "data_offset": 0, 00:12:26.575 "data_size": 0 00:12:26.575 }, 00:12:26.575 { 00:12:26.575 "name": "BaseBdev2", 00:12:26.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.575 "is_configured": false, 00:12:26.575 "data_offset": 0, 00:12:26.575 "data_size": 0 00:12:26.575 }, 00:12:26.575 { 00:12:26.575 "name": "BaseBdev3", 00:12:26.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.575 "is_configured": false, 00:12:26.575 "data_offset": 0, 00:12:26.575 "data_size": 0 00:12:26.575 }, 00:12:26.575 { 00:12:26.575 "name": "BaseBdev4", 00:12:26.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.575 "is_configured": false, 00:12:26.575 "data_offset": 0, 00:12:26.575 "data_size": 0 00:12:26.575 } 00:12:26.575 ] 00:12:26.575 }' 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.575 04:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.833 [2024-11-27 04:29:23.372920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:26.833 [2024-11-27 04:29:23.372968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.833 [2024-11-27 04:29:23.384894] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:26.833 [2024-11-27 04:29:23.384939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:26.833 [2024-11-27 04:29:23.384965] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:26.833 [2024-11-27 04:29:23.384975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:26.833 [2024-11-27 04:29:23.384982] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:26.833 [2024-11-27 04:29:23.384993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:26.833 [2024-11-27 04:29:23.385000] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:26.833 [2024-11-27 04:29:23.385009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.833 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.092 [2024-11-27 04:29:23.433524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.092 BaseBdev1 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.092 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.092 [ 00:12:27.092 { 00:12:27.092 "name": "BaseBdev1", 00:12:27.092 "aliases": [ 00:12:27.092 "b3b33a99-ec08-48c0-827f-c754bfc7974c" 00:12:27.092 ], 00:12:27.092 "product_name": "Malloc disk", 00:12:27.092 "block_size": 512, 00:12:27.092 "num_blocks": 65536, 00:12:27.092 "uuid": "b3b33a99-ec08-48c0-827f-c754bfc7974c", 00:12:27.092 "assigned_rate_limits": { 00:12:27.092 "rw_ios_per_sec": 0, 00:12:27.092 "rw_mbytes_per_sec": 0, 00:12:27.092 "r_mbytes_per_sec": 0, 00:12:27.092 "w_mbytes_per_sec": 0 00:12:27.092 }, 00:12:27.092 "claimed": true, 00:12:27.092 "claim_type": "exclusive_write", 00:12:27.092 "zoned": false, 00:12:27.092 "supported_io_types": { 00:12:27.092 "read": true, 00:12:27.092 "write": true, 00:12:27.092 "unmap": true, 00:12:27.092 "flush": true, 00:12:27.092 "reset": true, 00:12:27.092 "nvme_admin": false, 00:12:27.092 "nvme_io": false, 00:12:27.092 "nvme_io_md": false, 00:12:27.092 "write_zeroes": true, 00:12:27.092 "zcopy": true, 00:12:27.092 "get_zone_info": false, 00:12:27.092 "zone_management": false, 00:12:27.092 "zone_append": false, 00:12:27.092 "compare": false, 00:12:27.092 "compare_and_write": false, 00:12:27.092 "abort": true, 00:12:27.092 "seek_hole": false, 00:12:27.092 "seek_data": false, 00:12:27.092 "copy": true, 00:12:27.092 "nvme_iov_md": false 00:12:27.093 }, 00:12:27.093 "memory_domains": [ 00:12:27.093 { 00:12:27.093 "dma_device_id": "system", 00:12:27.093 "dma_device_type": 1 00:12:27.093 }, 00:12:27.093 { 00:12:27.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.093 "dma_device_type": 2 00:12:27.093 } 00:12:27.093 ], 00:12:27.093 "driver_specific": {} 
00:12:27.093 } 00:12:27.093 ] 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.093 "name": "Existed_Raid", 00:12:27.093 "uuid": "d31db3eb-eb88-4432-aa9c-76d4b724f29f", 00:12:27.093 "strip_size_kb": 64, 00:12:27.093 "state": "configuring", 00:12:27.093 "raid_level": "raid0", 00:12:27.093 "superblock": true, 00:12:27.093 "num_base_bdevs": 4, 00:12:27.093 "num_base_bdevs_discovered": 1, 00:12:27.093 "num_base_bdevs_operational": 4, 00:12:27.093 "base_bdevs_list": [ 00:12:27.093 { 00:12:27.093 "name": "BaseBdev1", 00:12:27.093 "uuid": "b3b33a99-ec08-48c0-827f-c754bfc7974c", 00:12:27.093 "is_configured": true, 00:12:27.093 "data_offset": 2048, 00:12:27.093 "data_size": 63488 00:12:27.093 }, 00:12:27.093 { 00:12:27.093 "name": "BaseBdev2", 00:12:27.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.093 "is_configured": false, 00:12:27.093 "data_offset": 0, 00:12:27.093 "data_size": 0 00:12:27.093 }, 00:12:27.093 { 00:12:27.093 "name": "BaseBdev3", 00:12:27.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.093 "is_configured": false, 00:12:27.093 "data_offset": 0, 00:12:27.093 "data_size": 0 00:12:27.093 }, 00:12:27.093 { 00:12:27.093 "name": "BaseBdev4", 00:12:27.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.093 "is_configured": false, 00:12:27.093 "data_offset": 0, 00:12:27.093 "data_size": 0 00:12:27.093 } 00:12:27.093 ] 00:12:27.093 }' 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.093 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.352 [2024-11-27 04:29:23.920786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:27.352 [2024-11-27 04:29:23.920934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.352 [2024-11-27 04:29:23.928848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.352 [2024-11-27 04:29:23.930899] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:27.352 [2024-11-27 04:29:23.931005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:27.352 [2024-11-27 04:29:23.931046] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:27.352 [2024-11-27 04:29:23.931079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:27.352 [2024-11-27 04:29:23.931115] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:27.352 [2024-11-27 04:29:23.931174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:27.352 04:29:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.352 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.612 "name": 
"Existed_Raid", 00:12:27.612 "uuid": "59db3604-a565-4871-9151-2eb2a19a5df9", 00:12:27.612 "strip_size_kb": 64, 00:12:27.612 "state": "configuring", 00:12:27.612 "raid_level": "raid0", 00:12:27.612 "superblock": true, 00:12:27.612 "num_base_bdevs": 4, 00:12:27.612 "num_base_bdevs_discovered": 1, 00:12:27.612 "num_base_bdevs_operational": 4, 00:12:27.612 "base_bdevs_list": [ 00:12:27.612 { 00:12:27.612 "name": "BaseBdev1", 00:12:27.612 "uuid": "b3b33a99-ec08-48c0-827f-c754bfc7974c", 00:12:27.612 "is_configured": true, 00:12:27.612 "data_offset": 2048, 00:12:27.612 "data_size": 63488 00:12:27.612 }, 00:12:27.612 { 00:12:27.612 "name": "BaseBdev2", 00:12:27.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.612 "is_configured": false, 00:12:27.612 "data_offset": 0, 00:12:27.612 "data_size": 0 00:12:27.612 }, 00:12:27.612 { 00:12:27.612 "name": "BaseBdev3", 00:12:27.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.612 "is_configured": false, 00:12:27.612 "data_offset": 0, 00:12:27.612 "data_size": 0 00:12:27.612 }, 00:12:27.612 { 00:12:27.612 "name": "BaseBdev4", 00:12:27.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.612 "is_configured": false, 00:12:27.612 "data_offset": 0, 00:12:27.612 "data_size": 0 00:12:27.612 } 00:12:27.612 ] 00:12:27.612 }' 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.612 04:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.873 [2024-11-27 04:29:24.418282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:27.873 BaseBdev2 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.873 [ 00:12:27.873 { 00:12:27.873 "name": "BaseBdev2", 00:12:27.873 "aliases": [ 00:12:27.873 "e507e2d5-ed6f-4208-813e-75b20d9c075c" 00:12:27.873 ], 00:12:27.873 "product_name": "Malloc disk", 00:12:27.873 "block_size": 512, 00:12:27.873 "num_blocks": 65536, 00:12:27.873 "uuid": "e507e2d5-ed6f-4208-813e-75b20d9c075c", 00:12:27.873 
"assigned_rate_limits": { 00:12:27.873 "rw_ios_per_sec": 0, 00:12:27.873 "rw_mbytes_per_sec": 0, 00:12:27.873 "r_mbytes_per_sec": 0, 00:12:27.873 "w_mbytes_per_sec": 0 00:12:27.873 }, 00:12:27.873 "claimed": true, 00:12:27.873 "claim_type": "exclusive_write", 00:12:27.873 "zoned": false, 00:12:27.873 "supported_io_types": { 00:12:27.873 "read": true, 00:12:27.873 "write": true, 00:12:27.873 "unmap": true, 00:12:27.873 "flush": true, 00:12:27.873 "reset": true, 00:12:27.873 "nvme_admin": false, 00:12:27.873 "nvme_io": false, 00:12:27.873 "nvme_io_md": false, 00:12:27.873 "write_zeroes": true, 00:12:27.873 "zcopy": true, 00:12:27.873 "get_zone_info": false, 00:12:27.873 "zone_management": false, 00:12:27.873 "zone_append": false, 00:12:27.873 "compare": false, 00:12:27.873 "compare_and_write": false, 00:12:27.873 "abort": true, 00:12:27.873 "seek_hole": false, 00:12:27.873 "seek_data": false, 00:12:27.873 "copy": true, 00:12:27.873 "nvme_iov_md": false 00:12:27.873 }, 00:12:27.873 "memory_domains": [ 00:12:27.873 { 00:12:27.873 "dma_device_id": "system", 00:12:27.873 "dma_device_type": 1 00:12:27.873 }, 00:12:27.873 { 00:12:27.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.873 "dma_device_type": 2 00:12:27.873 } 00:12:27.873 ], 00:12:27.873 "driver_specific": {} 00:12:27.873 } 00:12:27.873 ] 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:27.873 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.132 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.132 "name": "Existed_Raid", 00:12:28.132 "uuid": "59db3604-a565-4871-9151-2eb2a19a5df9", 00:12:28.132 "strip_size_kb": 64, 00:12:28.132 "state": "configuring", 00:12:28.132 "raid_level": "raid0", 00:12:28.132 "superblock": true, 00:12:28.132 "num_base_bdevs": 4, 00:12:28.132 "num_base_bdevs_discovered": 2, 00:12:28.132 "num_base_bdevs_operational": 4, 
00:12:28.132 "base_bdevs_list": [ 00:12:28.132 { 00:12:28.132 "name": "BaseBdev1", 00:12:28.132 "uuid": "b3b33a99-ec08-48c0-827f-c754bfc7974c", 00:12:28.132 "is_configured": true, 00:12:28.132 "data_offset": 2048, 00:12:28.132 "data_size": 63488 00:12:28.132 }, 00:12:28.132 { 00:12:28.132 "name": "BaseBdev2", 00:12:28.132 "uuid": "e507e2d5-ed6f-4208-813e-75b20d9c075c", 00:12:28.132 "is_configured": true, 00:12:28.132 "data_offset": 2048, 00:12:28.132 "data_size": 63488 00:12:28.132 }, 00:12:28.132 { 00:12:28.132 "name": "BaseBdev3", 00:12:28.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.132 "is_configured": false, 00:12:28.132 "data_offset": 0, 00:12:28.132 "data_size": 0 00:12:28.132 }, 00:12:28.132 { 00:12:28.132 "name": "BaseBdev4", 00:12:28.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.132 "is_configured": false, 00:12:28.132 "data_offset": 0, 00:12:28.132 "data_size": 0 00:12:28.132 } 00:12:28.132 ] 00:12:28.132 }' 00:12:28.133 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.133 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.391 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:28.391 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.391 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.651 [2024-11-27 04:29:24.994514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:28.651 BaseBdev3 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.651 04:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.651 [ 00:12:28.651 { 00:12:28.651 "name": "BaseBdev3", 00:12:28.651 "aliases": [ 00:12:28.651 "e3e4a82a-968c-4878-8c8e-6492a7080ec6" 00:12:28.651 ], 00:12:28.651 "product_name": "Malloc disk", 00:12:28.651 "block_size": 512, 00:12:28.651 "num_blocks": 65536, 00:12:28.651 "uuid": "e3e4a82a-968c-4878-8c8e-6492a7080ec6", 00:12:28.651 "assigned_rate_limits": { 00:12:28.651 "rw_ios_per_sec": 0, 00:12:28.651 "rw_mbytes_per_sec": 0, 00:12:28.651 "r_mbytes_per_sec": 0, 00:12:28.651 "w_mbytes_per_sec": 0 00:12:28.651 }, 00:12:28.651 "claimed": true, 00:12:28.651 "claim_type": "exclusive_write", 00:12:28.651 "zoned": false, 00:12:28.651 "supported_io_types": { 00:12:28.651 "read": true, 00:12:28.651 
"write": true, 00:12:28.651 "unmap": true, 00:12:28.651 "flush": true, 00:12:28.651 "reset": true, 00:12:28.651 "nvme_admin": false, 00:12:28.651 "nvme_io": false, 00:12:28.651 "nvme_io_md": false, 00:12:28.651 "write_zeroes": true, 00:12:28.651 "zcopy": true, 00:12:28.651 "get_zone_info": false, 00:12:28.651 "zone_management": false, 00:12:28.651 "zone_append": false, 00:12:28.651 "compare": false, 00:12:28.651 "compare_and_write": false, 00:12:28.651 "abort": true, 00:12:28.651 "seek_hole": false, 00:12:28.651 "seek_data": false, 00:12:28.651 "copy": true, 00:12:28.651 "nvme_iov_md": false 00:12:28.651 }, 00:12:28.651 "memory_domains": [ 00:12:28.651 { 00:12:28.651 "dma_device_id": "system", 00:12:28.651 "dma_device_type": 1 00:12:28.651 }, 00:12:28.651 { 00:12:28.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.651 "dma_device_type": 2 00:12:28.651 } 00:12:28.651 ], 00:12:28.651 "driver_specific": {} 00:12:28.651 } 00:12:28.651 ] 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.651 "name": "Existed_Raid", 00:12:28.651 "uuid": "59db3604-a565-4871-9151-2eb2a19a5df9", 00:12:28.651 "strip_size_kb": 64, 00:12:28.651 "state": "configuring", 00:12:28.651 "raid_level": "raid0", 00:12:28.651 "superblock": true, 00:12:28.651 "num_base_bdevs": 4, 00:12:28.651 "num_base_bdevs_discovered": 3, 00:12:28.651 "num_base_bdevs_operational": 4, 00:12:28.651 "base_bdevs_list": [ 00:12:28.651 { 00:12:28.651 "name": "BaseBdev1", 00:12:28.651 "uuid": "b3b33a99-ec08-48c0-827f-c754bfc7974c", 00:12:28.651 "is_configured": true, 00:12:28.651 "data_offset": 2048, 00:12:28.651 "data_size": 63488 00:12:28.651 }, 00:12:28.651 { 00:12:28.651 "name": "BaseBdev2", 00:12:28.651 "uuid": 
"e507e2d5-ed6f-4208-813e-75b20d9c075c", 00:12:28.651 "is_configured": true, 00:12:28.651 "data_offset": 2048, 00:12:28.651 "data_size": 63488 00:12:28.651 }, 00:12:28.651 { 00:12:28.651 "name": "BaseBdev3", 00:12:28.651 "uuid": "e3e4a82a-968c-4878-8c8e-6492a7080ec6", 00:12:28.651 "is_configured": true, 00:12:28.651 "data_offset": 2048, 00:12:28.651 "data_size": 63488 00:12:28.651 }, 00:12:28.651 { 00:12:28.651 "name": "BaseBdev4", 00:12:28.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.651 "is_configured": false, 00:12:28.651 "data_offset": 0, 00:12:28.651 "data_size": 0 00:12:28.651 } 00:12:28.651 ] 00:12:28.651 }' 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.651 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.909 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:28.909 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.909 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.909 [2024-11-27 04:29:25.492678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:28.909 [2024-11-27 04:29:25.492992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:28.909 [2024-11-27 04:29:25.493014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:28.909 [2024-11-27 04:29:25.493360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:28.909 [2024-11-27 04:29:25.493552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:28.909 [2024-11-27 04:29:25.493572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:28.909 BaseBdev4 00:12:28.909 [2024-11-27 04:29:25.493757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:29.167 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.168 [ 00:12:29.168 { 00:12:29.168 "name": "BaseBdev4", 00:12:29.168 "aliases": [ 00:12:29.168 "a81f8ce9-4d78-47ca-b295-eaedad870bab" 00:12:29.168 ], 00:12:29.168 "product_name": "Malloc disk", 00:12:29.168 "block_size": 512, 00:12:29.168 
"num_blocks": 65536, 00:12:29.168 "uuid": "a81f8ce9-4d78-47ca-b295-eaedad870bab", 00:12:29.168 "assigned_rate_limits": { 00:12:29.168 "rw_ios_per_sec": 0, 00:12:29.168 "rw_mbytes_per_sec": 0, 00:12:29.168 "r_mbytes_per_sec": 0, 00:12:29.168 "w_mbytes_per_sec": 0 00:12:29.168 }, 00:12:29.168 "claimed": true, 00:12:29.168 "claim_type": "exclusive_write", 00:12:29.168 "zoned": false, 00:12:29.168 "supported_io_types": { 00:12:29.168 "read": true, 00:12:29.168 "write": true, 00:12:29.168 "unmap": true, 00:12:29.168 "flush": true, 00:12:29.168 "reset": true, 00:12:29.168 "nvme_admin": false, 00:12:29.168 "nvme_io": false, 00:12:29.168 "nvme_io_md": false, 00:12:29.168 "write_zeroes": true, 00:12:29.168 "zcopy": true, 00:12:29.168 "get_zone_info": false, 00:12:29.168 "zone_management": false, 00:12:29.168 "zone_append": false, 00:12:29.168 "compare": false, 00:12:29.168 "compare_and_write": false, 00:12:29.168 "abort": true, 00:12:29.168 "seek_hole": false, 00:12:29.168 "seek_data": false, 00:12:29.168 "copy": true, 00:12:29.168 "nvme_iov_md": false 00:12:29.168 }, 00:12:29.168 "memory_domains": [ 00:12:29.168 { 00:12:29.168 "dma_device_id": "system", 00:12:29.168 "dma_device_type": 1 00:12:29.168 }, 00:12:29.168 { 00:12:29.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.168 "dma_device_type": 2 00:12:29.168 } 00:12:29.168 ], 00:12:29.168 "driver_specific": {} 00:12:29.168 } 00:12:29.168 ] 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.168 "name": "Existed_Raid", 00:12:29.168 "uuid": "59db3604-a565-4871-9151-2eb2a19a5df9", 00:12:29.168 "strip_size_kb": 64, 00:12:29.168 "state": "online", 00:12:29.168 "raid_level": "raid0", 00:12:29.168 "superblock": true, 00:12:29.168 "num_base_bdevs": 4, 
00:12:29.168 "num_base_bdevs_discovered": 4, 00:12:29.168 "num_base_bdevs_operational": 4, 00:12:29.168 "base_bdevs_list": [ 00:12:29.168 { 00:12:29.168 "name": "BaseBdev1", 00:12:29.168 "uuid": "b3b33a99-ec08-48c0-827f-c754bfc7974c", 00:12:29.168 "is_configured": true, 00:12:29.168 "data_offset": 2048, 00:12:29.168 "data_size": 63488 00:12:29.168 }, 00:12:29.168 { 00:12:29.168 "name": "BaseBdev2", 00:12:29.168 "uuid": "e507e2d5-ed6f-4208-813e-75b20d9c075c", 00:12:29.168 "is_configured": true, 00:12:29.168 "data_offset": 2048, 00:12:29.168 "data_size": 63488 00:12:29.168 }, 00:12:29.168 { 00:12:29.168 "name": "BaseBdev3", 00:12:29.168 "uuid": "e3e4a82a-968c-4878-8c8e-6492a7080ec6", 00:12:29.168 "is_configured": true, 00:12:29.168 "data_offset": 2048, 00:12:29.168 "data_size": 63488 00:12:29.168 }, 00:12:29.168 { 00:12:29.168 "name": "BaseBdev4", 00:12:29.168 "uuid": "a81f8ce9-4d78-47ca-b295-eaedad870bab", 00:12:29.168 "is_configured": true, 00:12:29.168 "data_offset": 2048, 00:12:29.168 "data_size": 63488 00:12:29.168 } 00:12:29.168 ] 00:12:29.168 }' 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.168 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:29.427 
04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.427 [2024-11-27 04:29:25.984338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.427 04:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.427 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:29.427 "name": "Existed_Raid", 00:12:29.427 "aliases": [ 00:12:29.427 "59db3604-a565-4871-9151-2eb2a19a5df9" 00:12:29.427 ], 00:12:29.427 "product_name": "Raid Volume", 00:12:29.427 "block_size": 512, 00:12:29.427 "num_blocks": 253952, 00:12:29.427 "uuid": "59db3604-a565-4871-9151-2eb2a19a5df9", 00:12:29.427 "assigned_rate_limits": { 00:12:29.427 "rw_ios_per_sec": 0, 00:12:29.427 "rw_mbytes_per_sec": 0, 00:12:29.427 "r_mbytes_per_sec": 0, 00:12:29.427 "w_mbytes_per_sec": 0 00:12:29.427 }, 00:12:29.427 "claimed": false, 00:12:29.427 "zoned": false, 00:12:29.427 "supported_io_types": { 00:12:29.427 "read": true, 00:12:29.427 "write": true, 00:12:29.427 "unmap": true, 00:12:29.427 "flush": true, 00:12:29.427 "reset": true, 00:12:29.427 "nvme_admin": false, 00:12:29.427 "nvme_io": false, 00:12:29.427 "nvme_io_md": false, 00:12:29.427 "write_zeroes": true, 00:12:29.427 "zcopy": false, 00:12:29.427 "get_zone_info": false, 00:12:29.427 "zone_management": false, 00:12:29.427 "zone_append": false, 00:12:29.427 "compare": false, 00:12:29.427 "compare_and_write": false, 00:12:29.427 "abort": false, 00:12:29.427 "seek_hole": false, 00:12:29.427 "seek_data": false, 00:12:29.427 "copy": false, 00:12:29.427 
"nvme_iov_md": false 00:12:29.427 }, 00:12:29.427 "memory_domains": [ 00:12:29.427 { 00:12:29.427 "dma_device_id": "system", 00:12:29.427 "dma_device_type": 1 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.427 "dma_device_type": 2 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "dma_device_id": "system", 00:12:29.427 "dma_device_type": 1 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.427 "dma_device_type": 2 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "dma_device_id": "system", 00:12:29.427 "dma_device_type": 1 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.427 "dma_device_type": 2 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "dma_device_id": "system", 00:12:29.427 "dma_device_type": 1 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.427 "dma_device_type": 2 00:12:29.427 } 00:12:29.427 ], 00:12:29.427 "driver_specific": { 00:12:29.427 "raid": { 00:12:29.427 "uuid": "59db3604-a565-4871-9151-2eb2a19a5df9", 00:12:29.427 "strip_size_kb": 64, 00:12:29.427 "state": "online", 00:12:29.427 "raid_level": "raid0", 00:12:29.427 "superblock": true, 00:12:29.427 "num_base_bdevs": 4, 00:12:29.427 "num_base_bdevs_discovered": 4, 00:12:29.427 "num_base_bdevs_operational": 4, 00:12:29.427 "base_bdevs_list": [ 00:12:29.427 { 00:12:29.427 "name": "BaseBdev1", 00:12:29.427 "uuid": "b3b33a99-ec08-48c0-827f-c754bfc7974c", 00:12:29.427 "is_configured": true, 00:12:29.427 "data_offset": 2048, 00:12:29.427 "data_size": 63488 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "name": "BaseBdev2", 00:12:29.427 "uuid": "e507e2d5-ed6f-4208-813e-75b20d9c075c", 00:12:29.427 "is_configured": true, 00:12:29.427 "data_offset": 2048, 00:12:29.427 "data_size": 63488 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "name": "BaseBdev3", 00:12:29.427 "uuid": "e3e4a82a-968c-4878-8c8e-6492a7080ec6", 00:12:29.427 "is_configured": true, 
00:12:29.427 "data_offset": 2048, 00:12:29.427 "data_size": 63488 00:12:29.427 }, 00:12:29.427 { 00:12:29.427 "name": "BaseBdev4", 00:12:29.427 "uuid": "a81f8ce9-4d78-47ca-b295-eaedad870bab", 00:12:29.427 "is_configured": true, 00:12:29.427 "data_offset": 2048, 00:12:29.427 "data_size": 63488 00:12:29.427 } 00:12:29.427 ] 00:12:29.427 } 00:12:29.427 } 00:12:29.427 }' 00:12:29.427 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:29.686 BaseBdev2 00:12:29.686 BaseBdev3 00:12:29.686 BaseBdev4' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.686 04:29:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.686 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.945 [2024-11-27 04:29:26.295636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:29.945 [2024-11-27 04:29:26.295675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.945 [2024-11-27 04:29:26.295732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.945 "name": "Existed_Raid", 00:12:29.945 "uuid": "59db3604-a565-4871-9151-2eb2a19a5df9", 00:12:29.945 "strip_size_kb": 64, 00:12:29.945 "state": "offline", 00:12:29.945 "raid_level": "raid0", 00:12:29.945 "superblock": true, 00:12:29.945 "num_base_bdevs": 4, 00:12:29.945 "num_base_bdevs_discovered": 3, 00:12:29.945 "num_base_bdevs_operational": 3, 00:12:29.945 "base_bdevs_list": [ 00:12:29.945 { 00:12:29.945 "name": null, 00:12:29.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.945 "is_configured": false, 00:12:29.945 "data_offset": 0, 00:12:29.945 "data_size": 63488 00:12:29.945 }, 00:12:29.945 { 00:12:29.945 "name": "BaseBdev2", 00:12:29.945 "uuid": "e507e2d5-ed6f-4208-813e-75b20d9c075c", 00:12:29.945 "is_configured": true, 00:12:29.945 "data_offset": 2048, 00:12:29.945 "data_size": 63488 00:12:29.945 }, 00:12:29.945 { 00:12:29.945 "name": "BaseBdev3", 00:12:29.945 "uuid": "e3e4a82a-968c-4878-8c8e-6492a7080ec6", 00:12:29.945 "is_configured": true, 00:12:29.945 "data_offset": 2048, 00:12:29.945 "data_size": 63488 00:12:29.945 }, 00:12:29.945 { 00:12:29.945 "name": "BaseBdev4", 00:12:29.945 "uuid": "a81f8ce9-4d78-47ca-b295-eaedad870bab", 00:12:29.945 "is_configured": true, 00:12:29.945 "data_offset": 2048, 00:12:29.945 "data_size": 63488 00:12:29.945 } 00:12:29.945 ] 00:12:29.945 }' 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.945 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:30.538 04:29:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.538 04:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.538 [2024-11-27 04:29:26.920274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.538 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.538 [2024-11-27 04:29:27.085352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:30.797 04:29:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.797 [2024-11-27 04:29:27.254537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:30.797 [2024-11-27 04:29:27.254602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:30.797 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.056 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.056 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:31.056 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.057 BaseBdev2 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.057 [ 00:12:31.057 { 00:12:31.057 "name": "BaseBdev2", 00:12:31.057 "aliases": [ 00:12:31.057 
"173edcf5-9dc7-4f46-8769-7bc76135313e" 00:12:31.057 ], 00:12:31.057 "product_name": "Malloc disk", 00:12:31.057 "block_size": 512, 00:12:31.057 "num_blocks": 65536, 00:12:31.057 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e", 00:12:31.057 "assigned_rate_limits": { 00:12:31.057 "rw_ios_per_sec": 0, 00:12:31.057 "rw_mbytes_per_sec": 0, 00:12:31.057 "r_mbytes_per_sec": 0, 00:12:31.057 "w_mbytes_per_sec": 0 00:12:31.057 }, 00:12:31.057 "claimed": false, 00:12:31.057 "zoned": false, 00:12:31.057 "supported_io_types": { 00:12:31.057 "read": true, 00:12:31.057 "write": true, 00:12:31.057 "unmap": true, 00:12:31.057 "flush": true, 00:12:31.057 "reset": true, 00:12:31.057 "nvme_admin": false, 00:12:31.057 "nvme_io": false, 00:12:31.057 "nvme_io_md": false, 00:12:31.057 "write_zeroes": true, 00:12:31.057 "zcopy": true, 00:12:31.057 "get_zone_info": false, 00:12:31.057 "zone_management": false, 00:12:31.057 "zone_append": false, 00:12:31.057 "compare": false, 00:12:31.057 "compare_and_write": false, 00:12:31.057 "abort": true, 00:12:31.057 "seek_hole": false, 00:12:31.057 "seek_data": false, 00:12:31.057 "copy": true, 00:12:31.057 "nvme_iov_md": false 00:12:31.057 }, 00:12:31.057 "memory_domains": [ 00:12:31.057 { 00:12:31.057 "dma_device_id": "system", 00:12:31.057 "dma_device_type": 1 00:12:31.057 }, 00:12:31.057 { 00:12:31.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.057 "dma_device_type": 2 00:12:31.057 } 00:12:31.057 ], 00:12:31.057 "driver_specific": {} 00:12:31.057 } 00:12:31.057 ] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.057 04:29:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.057 BaseBdev3 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.057 [ 00:12:31.057 { 
00:12:31.057 "name": "BaseBdev3", 00:12:31.057 "aliases": [ 00:12:31.057 "87c25235-fa10-471a-acc0-fb21ab656f01" 00:12:31.057 ], 00:12:31.057 "product_name": "Malloc disk", 00:12:31.057 "block_size": 512, 00:12:31.057 "num_blocks": 65536, 00:12:31.057 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01", 00:12:31.057 "assigned_rate_limits": { 00:12:31.057 "rw_ios_per_sec": 0, 00:12:31.057 "rw_mbytes_per_sec": 0, 00:12:31.057 "r_mbytes_per_sec": 0, 00:12:31.057 "w_mbytes_per_sec": 0 00:12:31.057 }, 00:12:31.057 "claimed": false, 00:12:31.057 "zoned": false, 00:12:31.057 "supported_io_types": { 00:12:31.057 "read": true, 00:12:31.057 "write": true, 00:12:31.057 "unmap": true, 00:12:31.057 "flush": true, 00:12:31.057 "reset": true, 00:12:31.057 "nvme_admin": false, 00:12:31.057 "nvme_io": false, 00:12:31.057 "nvme_io_md": false, 00:12:31.057 "write_zeroes": true, 00:12:31.057 "zcopy": true, 00:12:31.057 "get_zone_info": false, 00:12:31.057 "zone_management": false, 00:12:31.057 "zone_append": false, 00:12:31.057 "compare": false, 00:12:31.057 "compare_and_write": false, 00:12:31.057 "abort": true, 00:12:31.057 "seek_hole": false, 00:12:31.057 "seek_data": false, 00:12:31.057 "copy": true, 00:12:31.057 "nvme_iov_md": false 00:12:31.057 }, 00:12:31.057 "memory_domains": [ 00:12:31.057 { 00:12:31.057 "dma_device_id": "system", 00:12:31.057 "dma_device_type": 1 00:12:31.057 }, 00:12:31.057 { 00:12:31.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.057 "dma_device_type": 2 00:12:31.057 } 00:12:31.057 ], 00:12:31.057 "driver_specific": {} 00:12:31.057 } 00:12:31.057 ] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.057 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.315 BaseBdev4 00:12:31.315 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:31.316 [ 00:12:31.316 { 00:12:31.316 "name": "BaseBdev4", 00:12:31.316 "aliases": [ 00:12:31.316 "32c73362-094e-49a5-95f4-6fc06d9faf40" 00:12:31.316 ], 00:12:31.316 "product_name": "Malloc disk", 00:12:31.316 "block_size": 512, 00:12:31.316 "num_blocks": 65536, 00:12:31.316 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40", 00:12:31.316 "assigned_rate_limits": { 00:12:31.316 "rw_ios_per_sec": 0, 00:12:31.316 "rw_mbytes_per_sec": 0, 00:12:31.316 "r_mbytes_per_sec": 0, 00:12:31.316 "w_mbytes_per_sec": 0 00:12:31.316 }, 00:12:31.316 "claimed": false, 00:12:31.316 "zoned": false, 00:12:31.316 "supported_io_types": { 00:12:31.316 "read": true, 00:12:31.316 "write": true, 00:12:31.316 "unmap": true, 00:12:31.316 "flush": true, 00:12:31.316 "reset": true, 00:12:31.316 "nvme_admin": false, 00:12:31.316 "nvme_io": false, 00:12:31.316 "nvme_io_md": false, 00:12:31.316 "write_zeroes": true, 00:12:31.316 "zcopy": true, 00:12:31.316 "get_zone_info": false, 00:12:31.316 "zone_management": false, 00:12:31.316 "zone_append": false, 00:12:31.316 "compare": false, 00:12:31.316 "compare_and_write": false, 00:12:31.316 "abort": true, 00:12:31.316 "seek_hole": false, 00:12:31.316 "seek_data": false, 00:12:31.316 "copy": true, 00:12:31.316 "nvme_iov_md": false 00:12:31.316 }, 00:12:31.316 "memory_domains": [ 00:12:31.316 { 00:12:31.316 "dma_device_id": "system", 00:12:31.316 "dma_device_type": 1 00:12:31.316 }, 00:12:31.316 { 00:12:31.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.316 "dma_device_type": 2 00:12:31.316 } 00:12:31.316 ], 00:12:31.316 "driver_specific": {} 00:12:31.316 } 00:12:31.316 ] 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:31.316 04:29:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.316 [2024-11-27 04:29:27.684766] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.316 [2024-11-27 04:29:27.684820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.316 [2024-11-27 04:29:27.684846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.316 [2024-11-27 04:29:27.686997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.316 [2024-11-27 04:29:27.687061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.316 "name": "Existed_Raid", 00:12:31.316 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd", 00:12:31.316 "strip_size_kb": 64, 00:12:31.316 "state": "configuring", 00:12:31.316 "raid_level": "raid0", 00:12:31.316 "superblock": true, 00:12:31.316 "num_base_bdevs": 4, 00:12:31.316 "num_base_bdevs_discovered": 3, 00:12:31.316 "num_base_bdevs_operational": 4, 00:12:31.316 "base_bdevs_list": [ 00:12:31.316 { 00:12:31.316 "name": "BaseBdev1", 00:12:31.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.316 "is_configured": false, 00:12:31.316 "data_offset": 0, 00:12:31.316 "data_size": 0 00:12:31.316 }, 00:12:31.316 { 00:12:31.316 "name": "BaseBdev2", 00:12:31.316 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e", 00:12:31.316 "is_configured": true, 00:12:31.316 "data_offset": 2048, 00:12:31.316 "data_size": 63488 
00:12:31.316 }, 00:12:31.316 { 00:12:31.316 "name": "BaseBdev3", 00:12:31.316 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01", 00:12:31.316 "is_configured": true, 00:12:31.316 "data_offset": 2048, 00:12:31.316 "data_size": 63488 00:12:31.316 }, 00:12:31.316 { 00:12:31.316 "name": "BaseBdev4", 00:12:31.316 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40", 00:12:31.316 "is_configured": true, 00:12:31.316 "data_offset": 2048, 00:12:31.316 "data_size": 63488 00:12:31.316 } 00:12:31.316 ] 00:12:31.316 }' 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.316 04:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.575 [2024-11-27 04:29:28.124045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.575 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.834 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.834 "name": "Existed_Raid", 00:12:31.834 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd", 00:12:31.834 "strip_size_kb": 64, 00:12:31.834 "state": "configuring", 00:12:31.834 "raid_level": "raid0", 00:12:31.834 "superblock": true, 00:12:31.834 "num_base_bdevs": 4, 00:12:31.834 "num_base_bdevs_discovered": 2, 00:12:31.834 "num_base_bdevs_operational": 4, 00:12:31.834 "base_bdevs_list": [ 00:12:31.834 { 00:12:31.834 "name": "BaseBdev1", 00:12:31.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.834 "is_configured": false, 00:12:31.834 "data_offset": 0, 00:12:31.834 "data_size": 0 00:12:31.834 }, 00:12:31.834 { 00:12:31.834 "name": null, 00:12:31.834 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e", 00:12:31.834 "is_configured": false, 00:12:31.834 "data_offset": 0, 00:12:31.834 "data_size": 63488 
00:12:31.834 }, 00:12:31.834 { 00:12:31.834 "name": "BaseBdev3", 00:12:31.834 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01", 00:12:31.834 "is_configured": true, 00:12:31.834 "data_offset": 2048, 00:12:31.834 "data_size": 63488 00:12:31.834 }, 00:12:31.834 { 00:12:31.834 "name": "BaseBdev4", 00:12:31.834 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40", 00:12:31.834 "is_configured": true, 00:12:31.834 "data_offset": 2048, 00:12:31.834 "data_size": 63488 00:12:31.834 } 00:12:31.834 ] 00:12:31.834 }' 00:12:31.835 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.835 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.093 [2024-11-27 04:29:28.651173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.093 BaseBdev1 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.093 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.093 [ 00:12:32.093 { 00:12:32.093 "name": "BaseBdev1", 00:12:32.093 "aliases": [ 00:12:32.093 "9ae39aed-fc74-45c4-8eba-dba6a913f056" 00:12:32.093 ], 00:12:32.093 "product_name": "Malloc disk", 00:12:32.093 "block_size": 512, 00:12:32.093 "num_blocks": 65536, 00:12:32.352 "uuid": "9ae39aed-fc74-45c4-8eba-dba6a913f056", 00:12:32.352 "assigned_rate_limits": { 00:12:32.352 "rw_ios_per_sec": 0, 00:12:32.352 "rw_mbytes_per_sec": 0, 
00:12:32.352 "r_mbytes_per_sec": 0, 00:12:32.352 "w_mbytes_per_sec": 0 00:12:32.352 }, 00:12:32.352 "claimed": true, 00:12:32.352 "claim_type": "exclusive_write", 00:12:32.352 "zoned": false, 00:12:32.352 "supported_io_types": { 00:12:32.352 "read": true, 00:12:32.352 "write": true, 00:12:32.352 "unmap": true, 00:12:32.352 "flush": true, 00:12:32.352 "reset": true, 00:12:32.352 "nvme_admin": false, 00:12:32.352 "nvme_io": false, 00:12:32.352 "nvme_io_md": false, 00:12:32.352 "write_zeroes": true, 00:12:32.352 "zcopy": true, 00:12:32.352 "get_zone_info": false, 00:12:32.353 "zone_management": false, 00:12:32.353 "zone_append": false, 00:12:32.353 "compare": false, 00:12:32.353 "compare_and_write": false, 00:12:32.353 "abort": true, 00:12:32.353 "seek_hole": false, 00:12:32.353 "seek_data": false, 00:12:32.353 "copy": true, 00:12:32.353 "nvme_iov_md": false 00:12:32.353 }, 00:12:32.353 "memory_domains": [ 00:12:32.353 { 00:12:32.353 "dma_device_id": "system", 00:12:32.353 "dma_device_type": 1 00:12:32.353 }, 00:12:32.353 { 00:12:32.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.353 "dma_device_type": 2 00:12:32.353 } 00:12:32.353 ], 00:12:32.353 "driver_specific": {} 00:12:32.353 } 00:12:32.353 ] 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.353 04:29:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.353 "name": "Existed_Raid", 00:12:32.353 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd", 00:12:32.353 "strip_size_kb": 64, 00:12:32.353 "state": "configuring", 00:12:32.353 "raid_level": "raid0", 00:12:32.353 "superblock": true, 00:12:32.353 "num_base_bdevs": 4, 00:12:32.353 "num_base_bdevs_discovered": 3, 00:12:32.353 "num_base_bdevs_operational": 4, 00:12:32.353 "base_bdevs_list": [ 00:12:32.353 { 00:12:32.353 "name": "BaseBdev1", 00:12:32.353 "uuid": "9ae39aed-fc74-45c4-8eba-dba6a913f056", 00:12:32.353 "is_configured": true, 00:12:32.353 "data_offset": 2048, 00:12:32.353 "data_size": 63488 00:12:32.353 }, 00:12:32.353 { 
00:12:32.353 "name": null, 00:12:32.353 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e", 00:12:32.353 "is_configured": false, 00:12:32.353 "data_offset": 0, 00:12:32.353 "data_size": 63488 00:12:32.353 }, 00:12:32.353 { 00:12:32.353 "name": "BaseBdev3", 00:12:32.353 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01", 00:12:32.353 "is_configured": true, 00:12:32.353 "data_offset": 2048, 00:12:32.353 "data_size": 63488 00:12:32.353 }, 00:12:32.353 { 00:12:32.353 "name": "BaseBdev4", 00:12:32.353 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40", 00:12:32.353 "is_configured": true, 00:12:32.353 "data_offset": 2048, 00:12:32.353 "data_size": 63488 00:12:32.353 } 00:12:32.353 ] 00:12:32.353 }' 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.353 04:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.612 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.612 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.612 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.612 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:32.612 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.872 [2024-11-27 04:29:29.214332] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.872 04:29:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.872 "name": "Existed_Raid", 00:12:32.872 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd", 00:12:32.872 "strip_size_kb": 64, 00:12:32.872 "state": "configuring", 00:12:32.872 "raid_level": "raid0", 00:12:32.872 "superblock": true, 00:12:32.872 "num_base_bdevs": 4, 00:12:32.872 "num_base_bdevs_discovered": 2, 00:12:32.872 "num_base_bdevs_operational": 4, 00:12:32.872 "base_bdevs_list": [ 00:12:32.872 { 00:12:32.872 "name": "BaseBdev1", 00:12:32.872 "uuid": "9ae39aed-fc74-45c4-8eba-dba6a913f056", 00:12:32.872 "is_configured": true, 00:12:32.872 "data_offset": 2048, 00:12:32.872 "data_size": 63488 00:12:32.872 }, 00:12:32.872 { 00:12:32.872 "name": null, 00:12:32.872 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e", 00:12:32.872 "is_configured": false, 00:12:32.872 "data_offset": 0, 00:12:32.872 "data_size": 63488 00:12:32.872 }, 00:12:32.872 { 00:12:32.872 "name": null, 00:12:32.872 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01", 00:12:32.872 "is_configured": false, 00:12:32.872 "data_offset": 0, 00:12:32.872 "data_size": 63488 00:12:32.872 }, 00:12:32.872 { 00:12:32.872 "name": "BaseBdev4", 00:12:32.872 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40", 00:12:32.872 "is_configured": true, 00:12:32.872 "data_offset": 2048, 00:12:32.872 "data_size": 63488 00:12:32.872 } 00:12:32.872 ] 00:12:32.872 }' 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.872 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.440 04:29:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:33.440 [2024-11-27 04:29:29.773362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.440 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:33.440 "name": "Existed_Raid",
00:12:33.440 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd",
00:12:33.440 "strip_size_kb": 64,
00:12:33.440 "state": "configuring",
00:12:33.440 "raid_level": "raid0",
00:12:33.440 "superblock": true,
00:12:33.440 "num_base_bdevs": 4,
00:12:33.440 "num_base_bdevs_discovered": 3,
00:12:33.440 "num_base_bdevs_operational": 4,
00:12:33.440 "base_bdevs_list": [
00:12:33.440 {
00:12:33.440 "name": "BaseBdev1",
00:12:33.440 "uuid": "9ae39aed-fc74-45c4-8eba-dba6a913f056",
00:12:33.440 "is_configured": true,
00:12:33.440 "data_offset": 2048,
00:12:33.440 "data_size": 63488
00:12:33.440 },
00:12:33.440 {
00:12:33.440 "name": null,
00:12:33.440 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e",
00:12:33.440 "is_configured": false,
00:12:33.440 "data_offset": 0,
00:12:33.440 "data_size": 63488
00:12:33.440 },
00:12:33.440 {
00:12:33.440 "name": "BaseBdev3",
00:12:33.440 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01",
00:12:33.440 "is_configured": true,
00:12:33.440 "data_offset": 2048,
00:12:33.440 "data_size": 63488
00:12:33.440 },
00:12:33.440 {
00:12:33.440 "name": "BaseBdev4",
00:12:33.440 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40",
00:12:33.441 "is_configured": true,
00:12:33.441 "data_offset": 2048,
00:12:33.441 "data_size": 63488
00:12:33.441 }
00:12:33.441 ]
00:12:33.441 }'
00:12:33.441 04:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:33.441 04:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:33.700 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.700 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.700 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:33.700 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:33.700 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:33.960 [2024-11-27 04:29:30.288575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:33.960 "name": "Existed_Raid",
00:12:33.960 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd",
00:12:33.960 "strip_size_kb": 64,
00:12:33.960 "state": "configuring",
00:12:33.960 "raid_level": "raid0",
00:12:33.960 "superblock": true,
00:12:33.960 "num_base_bdevs": 4,
00:12:33.960 "num_base_bdevs_discovered": 2,
00:12:33.960 "num_base_bdevs_operational": 4,
00:12:33.960 "base_bdevs_list": [
00:12:33.960 {
00:12:33.960 "name": null,
00:12:33.960 "uuid": "9ae39aed-fc74-45c4-8eba-dba6a913f056",
00:12:33.960 "is_configured": false,
00:12:33.960 "data_offset": 0,
00:12:33.960 "data_size": 63488
00:12:33.960 },
00:12:33.960 {
00:12:33.960 "name": null,
00:12:33.960 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e",
00:12:33.960 "is_configured": false,
00:12:33.960 "data_offset": 0,
00:12:33.960 "data_size": 63488
00:12:33.960 },
00:12:33.960 {
00:12:33.960 "name": "BaseBdev3",
00:12:33.960 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01",
00:12:33.960 "is_configured": true,
00:12:33.960 "data_offset": 2048,
00:12:33.960 "data_size": 63488
00:12:33.960 },
00:12:33.960 {
00:12:33.960 "name": "BaseBdev4",
00:12:33.960 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40",
00:12:33.960 "is_configured": true,
00:12:33.960 "data_offset": 2048,
00:12:33.960 "data_size": 63488
00:12:33.960 }
00:12:33.960 ]
00:12:33.960 }'
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:33.960 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:34.527 [2024-11-27 04:29:30.891623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.527 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:34.527 "name": "Existed_Raid",
00:12:34.527 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd",
00:12:34.527 "strip_size_kb": 64,
00:12:34.527 "state": "configuring",
00:12:34.527 "raid_level": "raid0",
00:12:34.527 "superblock": true,
00:12:34.527 "num_base_bdevs": 4,
00:12:34.527 "num_base_bdevs_discovered": 3,
00:12:34.527 "num_base_bdevs_operational": 4,
00:12:34.527 "base_bdevs_list": [
00:12:34.527 {
00:12:34.527 "name": null,
00:12:34.527 "uuid": "9ae39aed-fc74-45c4-8eba-dba6a913f056",
00:12:34.527 "is_configured": false,
00:12:34.527 "data_offset": 0,
00:12:34.527 "data_size": 63488
00:12:34.527 },
00:12:34.527 {
00:12:34.527 "name": "BaseBdev2",
00:12:34.527 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e",
00:12:34.527 "is_configured": true,
00:12:34.527 "data_offset": 2048,
00:12:34.527 "data_size": 63488
00:12:34.528 },
00:12:34.528 {
00:12:34.528 "name": "BaseBdev3",
00:12:34.528 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01",
00:12:34.528 "is_configured": true,
00:12:34.528 "data_offset": 2048,
00:12:34.528 "data_size": 63488
00:12:34.528 },
00:12:34.528 {
00:12:34.528 "name": "BaseBdev4",
00:12:34.528 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40",
00:12:34.528 "is_configured": true,
00:12:34.528 "data_offset": 2048,
00:12:34.528 "data_size": 63488
00:12:34.528 }
00:12:34.528 ]
00:12:34.528 }'
00:12:34.528 04:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:34.528 04:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9ae39aed-fc74-45c4-8eba-dba6a913f056
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.095 [2024-11-27 04:29:31.501302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:12:35.095 [2024-11-27 04:29:31.501687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
[2024-11-27 04:29:31.501707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
[2024-11-27 04:29:31.502010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:12:35.095 [2024-11-27 04:29:31.502196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:12:35.095 [2024-11-27 04:29:31.502223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
NewBaseBdev
[2024-11-27 04:29:31.502397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.095 [
00:12:35.095 {
00:12:35.095 "name": "NewBaseBdev",
00:12:35.095 "aliases": [
00:12:35.095 "9ae39aed-fc74-45c4-8eba-dba6a913f056"
00:12:35.095 ],
00:12:35.095 "product_name": "Malloc disk",
00:12:35.095 "block_size": 512,
00:12:35.095 "num_blocks": 65536,
00:12:35.095 "uuid": "9ae39aed-fc74-45c4-8eba-dba6a913f056",
00:12:35.095 "assigned_rate_limits": {
00:12:35.095 "rw_ios_per_sec": 0,
00:12:35.095 "rw_mbytes_per_sec": 0,
00:12:35.095 "r_mbytes_per_sec": 0,
00:12:35.095 "w_mbytes_per_sec": 0
00:12:35.095 },
00:12:35.095 "claimed": true,
00:12:35.095 "claim_type": "exclusive_write",
00:12:35.095 "zoned": false,
00:12:35.095 "supported_io_types": {
00:12:35.095 "read": true,
00:12:35.095 "write": true,
00:12:35.095 "unmap": true,
00:12:35.095 "flush": true,
00:12:35.095 "reset": true,
00:12:35.095 "nvme_admin": false,
00:12:35.095 "nvme_io": false,
00:12:35.095 "nvme_io_md": false,
00:12:35.095 "write_zeroes": true,
00:12:35.095 "zcopy": true,
00:12:35.095 "get_zone_info": false,
00:12:35.095 "zone_management": false,
00:12:35.095 "zone_append": false,
00:12:35.095 "compare": false,
00:12:35.095 "compare_and_write": false,
00:12:35.095 "abort": true,
00:12:35.095 "seek_hole": false,
00:12:35.095 "seek_data": false,
00:12:35.095 "copy": true,
00:12:35.095 "nvme_iov_md": false
00:12:35.095 },
00:12:35.095 "memory_domains": [
00:12:35.095 {
00:12:35.095 "dma_device_id": "system",
00:12:35.095 "dma_device_type": 1
00:12:35.095 },
00:12:35.095 {
00:12:35.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:35.095 "dma_device_type": 2
00:12:35.095 }
00:12:35.095 ],
00:12:35.095 "driver_specific": {}
00:12:35.095 }
00:12:35.095 ]
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:35.095 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:35.096 "name": "Existed_Raid",
00:12:35.096 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd",
00:12:35.096 "strip_size_kb": 64,
00:12:35.096 "state": "online",
00:12:35.096 "raid_level": "raid0",
00:12:35.096 "superblock": true,
00:12:35.096 "num_base_bdevs": 4,
00:12:35.096 "num_base_bdevs_discovered": 4,
00:12:35.096 "num_base_bdevs_operational": 4,
00:12:35.096 "base_bdevs_list": [
00:12:35.096 {
00:12:35.096 "name": "NewBaseBdev",
00:12:35.096 "uuid": "9ae39aed-fc74-45c4-8eba-dba6a913f056",
00:12:35.096 "is_configured": true,
00:12:35.096 "data_offset": 2048,
00:12:35.096 "data_size": 63488
00:12:35.096 },
00:12:35.096 {
00:12:35.096 "name": "BaseBdev2",
00:12:35.096 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e",
00:12:35.096 "is_configured": true,
00:12:35.096 "data_offset": 2048,
00:12:35.096 "data_size": 63488
00:12:35.096 },
00:12:35.096 {
00:12:35.096 "name": "BaseBdev3",
00:12:35.096 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01",
00:12:35.096 "is_configured": true,
00:12:35.096 "data_offset": 2048,
00:12:35.096 "data_size": 63488
00:12:35.096 },
00:12:35.096 {
00:12:35.096 "name": "BaseBdev4",
00:12:35.096 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40",
00:12:35.096 "is_configured": true,
00:12:35.096 "data_offset": 2048,
00:12:35.096 "data_size": 63488
00:12:35.096 }
00:12:35.096 ]
00:12:35.096 }'
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:35.096 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.665 [2024-11-27 04:29:31.957029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.665 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:35.665 "name": "Existed_Raid",
00:12:35.665 "aliases": [
00:12:35.665 "6942968e-7285-4c12-bb04-6a9c0a33cabd"
00:12:35.665 ],
00:12:35.665 "product_name": "Raid Volume",
00:12:35.665 "block_size": 512,
00:12:35.665 "num_blocks": 253952,
00:12:35.665 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd",
00:12:35.665 "assigned_rate_limits": {
00:12:35.665 "rw_ios_per_sec": 0,
00:12:35.665 "rw_mbytes_per_sec": 0,
00:12:35.665 "r_mbytes_per_sec": 0,
00:12:35.665 "w_mbytes_per_sec": 0
00:12:35.665 },
00:12:35.665 "claimed": false,
00:12:35.665 "zoned": false,
00:12:35.665 "supported_io_types": {
00:12:35.665 "read": true,
00:12:35.665 "write": true,
00:12:35.665 "unmap": true,
00:12:35.665 "flush": true,
00:12:35.665 "reset": true,
00:12:35.665 "nvme_admin": false,
00:12:35.665 "nvme_io": false,
00:12:35.665 "nvme_io_md": false,
00:12:35.665 "write_zeroes": true,
00:12:35.665 "zcopy": false,
00:12:35.665 "get_zone_info": false,
00:12:35.665 "zone_management": false,
00:12:35.665 "zone_append": false,
00:12:35.665 "compare": false,
00:12:35.665 "compare_and_write": false,
00:12:35.665 "abort": false,
00:12:35.665 "seek_hole": false,
00:12:35.665 "seek_data": false,
00:12:35.665 "copy": false,
00:12:35.665 "nvme_iov_md": false
00:12:35.665 },
00:12:35.665 "memory_domains": [
00:12:35.665 {
00:12:35.665 "dma_device_id": "system",
00:12:35.665 "dma_device_type": 1
00:12:35.665 },
00:12:35.665 {
00:12:35.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:35.665 "dma_device_type": 2
00:12:35.665 },
00:12:35.665 {
00:12:35.665 "dma_device_id": "system",
00:12:35.665 "dma_device_type": 1
00:12:35.665 },
00:12:35.665 {
00:12:35.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:35.665 "dma_device_type": 2
00:12:35.665 },
00:12:35.665 {
00:12:35.665 "dma_device_id": "system",
00:12:35.665 "dma_device_type": 1
00:12:35.665 },
00:12:35.665 {
00:12:35.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:35.665 "dma_device_type": 2
00:12:35.665 },
00:12:35.666 {
00:12:35.666 "dma_device_id": "system",
00:12:35.666 "dma_device_type": 1
00:12:35.666 },
00:12:35.666 {
00:12:35.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:35.666 "dma_device_type": 2
00:12:35.666 }
00:12:35.666 ],
00:12:35.666 "driver_specific": {
00:12:35.666 "raid": {
00:12:35.666 "uuid": "6942968e-7285-4c12-bb04-6a9c0a33cabd",
00:12:35.666 "strip_size_kb": 64,
00:12:35.666 "state": "online",
00:12:35.666 "raid_level": "raid0",
00:12:35.666 "superblock": true,
00:12:35.666 "num_base_bdevs": 4,
00:12:35.666 "num_base_bdevs_discovered": 4,
00:12:35.666 "num_base_bdevs_operational": 4,
00:12:35.666 "base_bdevs_list": [
00:12:35.666 {
00:12:35.666 "name": "NewBaseBdev",
00:12:35.666 "uuid": "9ae39aed-fc74-45c4-8eba-dba6a913f056",
00:12:35.666 "is_configured": true,
00:12:35.666 "data_offset": 2048,
00:12:35.666 "data_size": 63488
00:12:35.666 },
00:12:35.666 {
00:12:35.666 "name": "BaseBdev2",
00:12:35.666 "uuid": "173edcf5-9dc7-4f46-8769-7bc76135313e",
00:12:35.666 "is_configured": true,
00:12:35.666 "data_offset": 2048,
00:12:35.666 "data_size": 63488
00:12:35.666 },
00:12:35.666 {
00:12:35.666 "name": "BaseBdev3",
00:12:35.666 "uuid": "87c25235-fa10-471a-acc0-fb21ab656f01",
00:12:35.666 "is_configured": true,
00:12:35.666 "data_offset": 2048,
00:12:35.666 "data_size": 63488
00:12:35.666 },
00:12:35.666 {
00:12:35.666 "name": "BaseBdev4",
00:12:35.666 "uuid": "32c73362-094e-49a5-95f4-6fc06d9faf40",
00:12:35.666 "is_configured": true,
00:12:35.666 "data_offset": 2048,
00:12:35.666 "data_size": 63488
00:12:35.666 }
00:12:35.666 ]
00:12:35.666 }
00:12:35.666 }
00:12:35.666 }'
00:12:35.666 04:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:12:35.666 BaseBdev2
00:12:35.666 BaseBdev3
00:12:35.666 BaseBdev4'
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.666 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:35.931 [2024-11-27 04:29:32.292136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-11-27 04:29:32.292178] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-27 04:29:32.292284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-27 04:29:32.292365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-27 04:29:32.292378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70319
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70319 ']'
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70319
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70319
killing process with pid 70319
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70319'
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70319
00:12:35.931 [2024-11-27 04:29:32.342408] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:35.931 04:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70319
00:12:36.517 [2024-11-27 04:29:32.812171] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:37.894 04:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:12:37.894
00:12:37.894 real 0m12.203s
00:12:37.894 user 0m19.298s
00:12:37.894 sys 0m2.020s
00:12:37.894 04:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:37.894 04:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:37.894 ************************************
00:12:37.894 END TEST raid_state_function_test_sb
00:12:37.894 ************************************
00:12:37.894 04:29:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4
00:12:37.894 04:29:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:37.894 04:29:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:37.894 04:29:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:37.894 ************************************
00:12:37.894 START TEST raid_superblock_test
00:12:37.894 ************************************
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70998
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70998
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70998 ']'
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:37.894 04:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.895 [2024-11-27 04:29:34.271925] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:12:37.895 [2024-11-27 04:29:34.272177] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70998 ]
[2024-11-27 04:29:34.448591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 04:29:34.573287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:38.413 [2024-11-27 04:29:34.788070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-27 04:29:34.788184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:38.671 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:12:38.672
04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.672 malloc1 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.672 [2024-11-27 04:29:35.220210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:38.672 [2024-11-27 04:29:35.220298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.672 [2024-11-27 04:29:35.220339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:38.672 [2024-11-27 04:29:35.220350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.672 [2024-11-27 04:29:35.222914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.672 [2024-11-27 04:29:35.222953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:38.672 pt1 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.672 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.932 malloc2 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.932 [2024-11-27 04:29:35.285496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:38.932 [2024-11-27 04:29:35.285649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.932 [2024-11-27 04:29:35.285696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:38.932 [2024-11-27 04:29:35.285725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.932 [2024-11-27 04:29:35.288248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.932 [2024-11-27 04:29:35.288335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:38.932 
pt2 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.932 malloc3 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.932 [2024-11-27 04:29:35.368957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:38.932 [2024-11-27 04:29:35.369150] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.932 [2024-11-27 04:29:35.369202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:38.932 [2024-11-27 04:29:35.369244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.932 [2024-11-27 04:29:35.372133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.932 [2024-11-27 04:29:35.372220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:38.932 pt3 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.932 malloc4 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.932 [2024-11-27 04:29:35.445722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:38.932 [2024-11-27 04:29:35.445818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.932 [2024-11-27 04:29:35.445847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:38.932 [2024-11-27 04:29:35.445858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.932 [2024-11-27 04:29:35.448765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.932 [2024-11-27 04:29:35.448817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:38.932 pt4 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.932 [2024-11-27 04:29:35.457749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:38.932 [2024-11-27 
04:29:35.460160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:38.932 [2024-11-27 04:29:35.460271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:38.932 [2024-11-27 04:29:35.460332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:38.932 [2024-11-27 04:29:35.460565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:38.932 [2024-11-27 04:29:35.460588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:38.932 [2024-11-27 04:29:35.460944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:38.932 [2024-11-27 04:29:35.461207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:38.932 [2024-11-27 04:29:35.461225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:38.932 [2024-11-27 04:29:35.461472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.932 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.932 "name": "raid_bdev1", 00:12:38.932 "uuid": "aa124545-a090-4d65-b837-938ff3496c1c", 00:12:38.932 "strip_size_kb": 64, 00:12:38.932 "state": "online", 00:12:38.932 "raid_level": "raid0", 00:12:38.932 "superblock": true, 00:12:38.932 "num_base_bdevs": 4, 00:12:38.932 "num_base_bdevs_discovered": 4, 00:12:38.932 "num_base_bdevs_operational": 4, 00:12:38.932 "base_bdevs_list": [ 00:12:38.932 { 00:12:38.932 "name": "pt1", 00:12:38.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:38.932 "is_configured": true, 00:12:38.932 "data_offset": 2048, 00:12:38.932 "data_size": 63488 00:12:38.932 }, 00:12:38.932 { 00:12:38.932 "name": "pt2", 00:12:38.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.932 "is_configured": true, 00:12:38.932 "data_offset": 2048, 00:12:38.932 "data_size": 63488 00:12:38.932 }, 00:12:38.932 { 00:12:38.932 "name": "pt3", 00:12:38.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.932 "is_configured": true, 00:12:38.932 "data_offset": 2048, 00:12:38.932 
"data_size": 63488 00:12:38.932 }, 00:12:38.932 { 00:12:38.933 "name": "pt4", 00:12:38.933 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:38.933 "is_configured": true, 00:12:38.933 "data_offset": 2048, 00:12:38.933 "data_size": 63488 00:12:38.933 } 00:12:38.933 ] 00:12:38.933 }' 00:12:38.933 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.933 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.500 [2024-11-27 04:29:35.925356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.500 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:39.500 "name": "raid_bdev1", 00:12:39.500 "aliases": [ 00:12:39.500 "aa124545-a090-4d65-b837-938ff3496c1c" 
00:12:39.500 ], 00:12:39.500 "product_name": "Raid Volume", 00:12:39.500 "block_size": 512, 00:12:39.500 "num_blocks": 253952, 00:12:39.500 "uuid": "aa124545-a090-4d65-b837-938ff3496c1c", 00:12:39.500 "assigned_rate_limits": { 00:12:39.500 "rw_ios_per_sec": 0, 00:12:39.500 "rw_mbytes_per_sec": 0, 00:12:39.500 "r_mbytes_per_sec": 0, 00:12:39.500 "w_mbytes_per_sec": 0 00:12:39.500 }, 00:12:39.500 "claimed": false, 00:12:39.500 "zoned": false, 00:12:39.500 "supported_io_types": { 00:12:39.500 "read": true, 00:12:39.500 "write": true, 00:12:39.500 "unmap": true, 00:12:39.500 "flush": true, 00:12:39.500 "reset": true, 00:12:39.500 "nvme_admin": false, 00:12:39.500 "nvme_io": false, 00:12:39.500 "nvme_io_md": false, 00:12:39.500 "write_zeroes": true, 00:12:39.500 "zcopy": false, 00:12:39.500 "get_zone_info": false, 00:12:39.500 "zone_management": false, 00:12:39.500 "zone_append": false, 00:12:39.500 "compare": false, 00:12:39.500 "compare_and_write": false, 00:12:39.500 "abort": false, 00:12:39.500 "seek_hole": false, 00:12:39.500 "seek_data": false, 00:12:39.500 "copy": false, 00:12:39.500 "nvme_iov_md": false 00:12:39.500 }, 00:12:39.500 "memory_domains": [ 00:12:39.500 { 00:12:39.500 "dma_device_id": "system", 00:12:39.500 "dma_device_type": 1 00:12:39.500 }, 00:12:39.500 { 00:12:39.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.500 "dma_device_type": 2 00:12:39.500 }, 00:12:39.500 { 00:12:39.500 "dma_device_id": "system", 00:12:39.500 "dma_device_type": 1 00:12:39.500 }, 00:12:39.500 { 00:12:39.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.500 "dma_device_type": 2 00:12:39.500 }, 00:12:39.500 { 00:12:39.500 "dma_device_id": "system", 00:12:39.500 "dma_device_type": 1 00:12:39.500 }, 00:12:39.500 { 00:12:39.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.500 "dma_device_type": 2 00:12:39.500 }, 00:12:39.500 { 00:12:39.500 "dma_device_id": "system", 00:12:39.500 "dma_device_type": 1 00:12:39.500 }, 00:12:39.500 { 00:12:39.500 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:39.500 "dma_device_type": 2 00:12:39.500 } 00:12:39.500 ], 00:12:39.500 "driver_specific": { 00:12:39.500 "raid": { 00:12:39.500 "uuid": "aa124545-a090-4d65-b837-938ff3496c1c", 00:12:39.500 "strip_size_kb": 64, 00:12:39.500 "state": "online", 00:12:39.500 "raid_level": "raid0", 00:12:39.501 "superblock": true, 00:12:39.501 "num_base_bdevs": 4, 00:12:39.501 "num_base_bdevs_discovered": 4, 00:12:39.501 "num_base_bdevs_operational": 4, 00:12:39.501 "base_bdevs_list": [ 00:12:39.501 { 00:12:39.501 "name": "pt1", 00:12:39.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:39.501 "is_configured": true, 00:12:39.501 "data_offset": 2048, 00:12:39.501 "data_size": 63488 00:12:39.501 }, 00:12:39.501 { 00:12:39.501 "name": "pt2", 00:12:39.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:39.501 "is_configured": true, 00:12:39.501 "data_offset": 2048, 00:12:39.501 "data_size": 63488 00:12:39.501 }, 00:12:39.501 { 00:12:39.501 "name": "pt3", 00:12:39.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:39.501 "is_configured": true, 00:12:39.501 "data_offset": 2048, 00:12:39.501 "data_size": 63488 00:12:39.501 }, 00:12:39.501 { 00:12:39.501 "name": "pt4", 00:12:39.501 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:39.501 "is_configured": true, 00:12:39.501 "data_offset": 2048, 00:12:39.501 "data_size": 63488 00:12:39.501 } 00:12:39.501 ] 00:12:39.501 } 00:12:39.501 } 00:12:39.501 }' 00:12:39.501 04:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:39.501 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:39.501 pt2 00:12:39.501 pt3 00:12:39.501 pt4' 00:12:39.501 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.501 04:29:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:39.501 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.501 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:39.501 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.501 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.501 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.501 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.760 04:29:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.760 [2024-11-27 04:29:36.256803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aa124545-a090-4d65-b837-938ff3496c1c 00:12:39.760 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aa124545-a090-4d65-b837-938ff3496c1c ']' 00:12:39.761 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:39.761 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.761 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.761 [2024-11-27 04:29:36.300346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.761 [2024-11-27 04:29:36.300477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.761 [2024-11-27 04:29:36.300637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.761 [2024-11-27 04:29:36.300756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.761 [2024-11-27 04:29:36.300814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:39.761 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.761 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.761 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:39.761 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:39.761 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.761 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.021 04:29:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.021 [2024-11-27 04:29:36.464256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:40.021 [2024-11-27 04:29:36.466972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:40.021 [2024-11-27 04:29:36.467108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:40.021 [2024-11-27 04:29:36.467196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:40.021 [2024-11-27 04:29:36.467293] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:40.021 [2024-11-27 04:29:36.467435] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:40.021 [2024-11-27 04:29:36.467508] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:40.021 [2024-11-27 04:29:36.467583] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:40.021 [2024-11-27 04:29:36.467645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.021 [2024-11-27 04:29:36.467687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:40.021 request: 00:12:40.021 { 00:12:40.021 "name": "raid_bdev1", 00:12:40.021 "raid_level": "raid0", 00:12:40.021 "base_bdevs": [ 00:12:40.021 "malloc1", 00:12:40.021 "malloc2", 00:12:40.021 "malloc3", 00:12:40.021 "malloc4" 00:12:40.021 ], 00:12:40.021 "strip_size_kb": 64, 00:12:40.021 "superblock": false, 00:12:40.021 "method": "bdev_raid_create", 00:12:40.021 "req_id": 1 00:12:40.021 } 00:12:40.021 Got JSON-RPC error response 00:12:40.021 response: 00:12:40.021 { 00:12:40.021 "code": -17, 00:12:40.021 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:40.021 } 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.021 [2024-11-27 04:29:36.528312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:40.021 [2024-11-27 04:29:36.528423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.021 [2024-11-27 04:29:36.528451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:40.021 [2024-11-27 04:29:36.528465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.021 [2024-11-27 04:29:36.531150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.021 [2024-11-27 04:29:36.531199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:40.021 [2024-11-27 04:29:36.531317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:40.021 [2024-11-27 04:29:36.531392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:40.021 pt1 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.021 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.021 "name": "raid_bdev1", 00:12:40.021 "uuid": "aa124545-a090-4d65-b837-938ff3496c1c", 00:12:40.021 "strip_size_kb": 64, 00:12:40.021 "state": "configuring", 00:12:40.021 "raid_level": "raid0", 00:12:40.021 "superblock": true, 00:12:40.021 "num_base_bdevs": 4, 00:12:40.021 "num_base_bdevs_discovered": 1, 00:12:40.021 "num_base_bdevs_operational": 4, 00:12:40.021 "base_bdevs_list": [ 00:12:40.021 { 00:12:40.022 "name": "pt1", 00:12:40.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.022 "is_configured": true, 00:12:40.022 "data_offset": 2048, 00:12:40.022 "data_size": 63488 00:12:40.022 }, 00:12:40.022 { 00:12:40.022 "name": null, 00:12:40.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.022 "is_configured": false, 00:12:40.022 "data_offset": 2048, 00:12:40.022 "data_size": 63488 00:12:40.022 }, 00:12:40.022 { 00:12:40.022 "name": null, 00:12:40.022 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.022 "is_configured": false, 00:12:40.022 "data_offset": 2048, 00:12:40.022 "data_size": 63488 00:12:40.022 }, 00:12:40.022 { 00:12:40.022 "name": null, 00:12:40.022 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.022 "is_configured": false, 00:12:40.022 "data_offset": 2048, 00:12:40.022 "data_size": 63488 00:12:40.022 } 00:12:40.022 ] 00:12:40.022 }' 00:12:40.022 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.022 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:40.671 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:40.671 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.672 [2024-11-27 04:29:36.979561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:40.672 [2024-11-27 04:29:36.979784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.672 [2024-11-27 04:29:36.979840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:40.672 [2024-11-27 04:29:36.979891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.672 [2024-11-27 04:29:36.980543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.672 [2024-11-27 04:29:36.980620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:40.672 [2024-11-27 04:29:36.980781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:40.672 [2024-11-27 04:29:36.980842] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:40.672 pt2 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.672 [2024-11-27 04:29:36.991565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.672 04:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.672 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.672 04:29:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.672 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.672 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.672 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.672 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.672 "name": "raid_bdev1", 00:12:40.672 "uuid": "aa124545-a090-4d65-b837-938ff3496c1c", 00:12:40.672 "strip_size_kb": 64, 00:12:40.672 "state": "configuring", 00:12:40.672 "raid_level": "raid0", 00:12:40.672 "superblock": true, 00:12:40.672 "num_base_bdevs": 4, 00:12:40.672 "num_base_bdevs_discovered": 1, 00:12:40.672 "num_base_bdevs_operational": 4, 00:12:40.672 "base_bdevs_list": [ 00:12:40.672 { 00:12:40.672 "name": "pt1", 00:12:40.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.672 "is_configured": true, 00:12:40.672 "data_offset": 2048, 00:12:40.672 "data_size": 63488 00:12:40.672 }, 00:12:40.672 { 00:12:40.672 "name": null, 00:12:40.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.672 "is_configured": false, 00:12:40.672 "data_offset": 0, 00:12:40.672 "data_size": 63488 00:12:40.672 }, 00:12:40.672 { 00:12:40.672 "name": null, 00:12:40.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.672 "is_configured": false, 00:12:40.672 "data_offset": 2048, 00:12:40.672 "data_size": 63488 00:12:40.672 }, 00:12:40.672 { 00:12:40.672 "name": null, 00:12:40.672 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.672 "is_configured": false, 00:12:40.672 "data_offset": 2048, 00:12:40.672 "data_size": 63488 00:12:40.672 } 00:12:40.672 ] 00:12:40.672 }' 00:12:40.672 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.672 04:29:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.933 [2024-11-27 04:29:37.414771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:40.933 [2024-11-27 04:29:37.414889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.933 [2024-11-27 04:29:37.414914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:40.933 [2024-11-27 04:29:37.414923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.933 [2024-11-27 04:29:37.415513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.933 [2024-11-27 04:29:37.415542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:40.933 [2024-11-27 04:29:37.415659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:40.933 [2024-11-27 04:29:37.415685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:40.933 pt2 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.933 [2024-11-27 04:29:37.426664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:40.933 [2024-11-27 04:29:37.426719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.933 [2024-11-27 04:29:37.426739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:40.933 [2024-11-27 04:29:37.426748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.933 [2024-11-27 04:29:37.427187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.933 [2024-11-27 04:29:37.427205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:40.933 [2024-11-27 04:29:37.427278] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:40.933 [2024-11-27 04:29:37.427312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:40.933 pt3 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.933 [2024-11-27 04:29:37.438609] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:40.933 [2024-11-27 04:29:37.438659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.933 [2024-11-27 04:29:37.438679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:40.933 [2024-11-27 04:29:37.438687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.933 [2024-11-27 04:29:37.439105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.933 [2024-11-27 04:29:37.439122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:40.933 [2024-11-27 04:29:37.439194] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:40.933 [2024-11-27 04:29:37.439217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:40.933 [2024-11-27 04:29:37.439352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:40.933 [2024-11-27 04:29:37.439361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:40.933 [2024-11-27 04:29:37.439629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:40.933 [2024-11-27 04:29:37.439787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:40.933 [2024-11-27 04:29:37.439801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:40.933 [2024-11-27 04:29:37.439927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.933 pt4 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.933 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.933 "name": "raid_bdev1", 00:12:40.933 "uuid": "aa124545-a090-4d65-b837-938ff3496c1c", 00:12:40.933 "strip_size_kb": 64, 00:12:40.933 "state": "online", 00:12:40.933 "raid_level": "raid0", 00:12:40.933 
"superblock": true, 00:12:40.934 "num_base_bdevs": 4, 00:12:40.934 "num_base_bdevs_discovered": 4, 00:12:40.934 "num_base_bdevs_operational": 4, 00:12:40.934 "base_bdevs_list": [ 00:12:40.934 { 00:12:40.934 "name": "pt1", 00:12:40.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.934 "is_configured": true, 00:12:40.934 "data_offset": 2048, 00:12:40.934 "data_size": 63488 00:12:40.934 }, 00:12:40.934 { 00:12:40.934 "name": "pt2", 00:12:40.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.934 "is_configured": true, 00:12:40.934 "data_offset": 2048, 00:12:40.934 "data_size": 63488 00:12:40.934 }, 00:12:40.934 { 00:12:40.934 "name": "pt3", 00:12:40.934 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.934 "is_configured": true, 00:12:40.934 "data_offset": 2048, 00:12:40.934 "data_size": 63488 00:12:40.934 }, 00:12:40.934 { 00:12:40.934 "name": "pt4", 00:12:40.934 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.934 "is_configured": true, 00:12:40.934 "data_offset": 2048, 00:12:40.934 "data_size": 63488 00:12:40.934 } 00:12:40.934 ] 00:12:40.934 }' 00:12:40.934 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.934 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:41.509 04:29:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.509 [2024-11-27 04:29:37.946225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:41.509 "name": "raid_bdev1", 00:12:41.509 "aliases": [ 00:12:41.509 "aa124545-a090-4d65-b837-938ff3496c1c" 00:12:41.509 ], 00:12:41.509 "product_name": "Raid Volume", 00:12:41.509 "block_size": 512, 00:12:41.509 "num_blocks": 253952, 00:12:41.509 "uuid": "aa124545-a090-4d65-b837-938ff3496c1c", 00:12:41.509 "assigned_rate_limits": { 00:12:41.509 "rw_ios_per_sec": 0, 00:12:41.509 "rw_mbytes_per_sec": 0, 00:12:41.509 "r_mbytes_per_sec": 0, 00:12:41.509 "w_mbytes_per_sec": 0 00:12:41.509 }, 00:12:41.509 "claimed": false, 00:12:41.509 "zoned": false, 00:12:41.509 "supported_io_types": { 00:12:41.509 "read": true, 00:12:41.509 "write": true, 00:12:41.509 "unmap": true, 00:12:41.509 "flush": true, 00:12:41.509 "reset": true, 00:12:41.509 "nvme_admin": false, 00:12:41.509 "nvme_io": false, 00:12:41.509 "nvme_io_md": false, 00:12:41.509 "write_zeroes": true, 00:12:41.509 "zcopy": false, 00:12:41.509 "get_zone_info": false, 00:12:41.509 "zone_management": false, 00:12:41.509 "zone_append": false, 00:12:41.509 "compare": false, 00:12:41.509 "compare_and_write": false, 00:12:41.509 "abort": false, 00:12:41.509 "seek_hole": false, 00:12:41.509 "seek_data": false, 00:12:41.509 "copy": false, 00:12:41.509 "nvme_iov_md": false 00:12:41.509 }, 00:12:41.509 
"memory_domains": [ 00:12:41.509 { 00:12:41.509 "dma_device_id": "system", 00:12:41.509 "dma_device_type": 1 00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.509 "dma_device_type": 2 00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "dma_device_id": "system", 00:12:41.509 "dma_device_type": 1 00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.509 "dma_device_type": 2 00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "dma_device_id": "system", 00:12:41.509 "dma_device_type": 1 00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.509 "dma_device_type": 2 00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "dma_device_id": "system", 00:12:41.509 "dma_device_type": 1 00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.509 "dma_device_type": 2 00:12:41.509 } 00:12:41.509 ], 00:12:41.509 "driver_specific": { 00:12:41.509 "raid": { 00:12:41.509 "uuid": "aa124545-a090-4d65-b837-938ff3496c1c", 00:12:41.509 "strip_size_kb": 64, 00:12:41.509 "state": "online", 00:12:41.509 "raid_level": "raid0", 00:12:41.509 "superblock": true, 00:12:41.509 "num_base_bdevs": 4, 00:12:41.509 "num_base_bdevs_discovered": 4, 00:12:41.509 "num_base_bdevs_operational": 4, 00:12:41.509 "base_bdevs_list": [ 00:12:41.509 { 00:12:41.509 "name": "pt1", 00:12:41.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:41.509 "is_configured": true, 00:12:41.509 "data_offset": 2048, 00:12:41.509 "data_size": 63488 00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "name": "pt2", 00:12:41.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.509 "is_configured": true, 00:12:41.509 "data_offset": 2048, 00:12:41.509 "data_size": 63488 00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "name": "pt3", 00:12:41.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.509 "is_configured": true, 00:12:41.509 "data_offset": 2048, 00:12:41.509 "data_size": 63488 
00:12:41.509 }, 00:12:41.509 { 00:12:41.509 "name": "pt4", 00:12:41.509 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.509 "is_configured": true, 00:12:41.509 "data_offset": 2048, 00:12:41.509 "data_size": 63488 00:12:41.509 } 00:12:41.509 ] 00:12:41.509 } 00:12:41.509 } 00:12:41.509 }' 00:12:41.509 04:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:41.509 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:41.509 pt2 00:12:41.509 pt3 00:12:41.509 pt4' 00:12:41.509 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.509 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:41.509 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.509 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:41.509 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.509 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.509 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.509 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:41.768 [2024-11-27 04:29:38.269677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aa124545-a090-4d65-b837-938ff3496c1c '!=' aa124545-a090-4d65-b837-938ff3496c1c ']' 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70998 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70998 ']' 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70998 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.768 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70998 00:12:42.028 killing process with pid 70998 00:12:42.028 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.028 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.028 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70998' 00:12:42.028 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70998 00:12:42.028 [2024-11-27 04:29:38.354629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.028 04:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70998 00:12:42.028 [2024-11-27 04:29:38.354773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.028 [2024-11-27 04:29:38.354868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.028 [2024-11-27 04:29:38.354880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:42.288 [2024-11-27 04:29:38.853185] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.675 04:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:43.675 00:12:43.675 real 0m5.967s 00:12:43.675 user 0m8.397s 00:12:43.675 sys 0m1.015s 00:12:43.675 04:29:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.675 04:29:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.675 ************************************ 00:12:43.675 END TEST raid_superblock_test 
00:12:43.675 ************************************ 00:12:43.675 04:29:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:43.675 04:29:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:43.675 04:29:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.675 04:29:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.675 ************************************ 00:12:43.675 START TEST raid_read_error_test 00:12:43.675 ************************************ 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:43.675 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.G6DAcijcwu 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71265 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71265 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71265 ']' 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.676 04:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.936 [2024-11-27 04:29:40.314516] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:12:43.936 [2024-11-27 04:29:40.314720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71265 ] 00:12:43.936 [2024-11-27 04:29:40.489358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.196 [2024-11-27 04:29:40.642240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.456 [2024-11-27 04:29:40.890027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.456 [2024-11-27 04:29:40.890081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.717 BaseBdev1_malloc 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.717 true 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.717 [2024-11-27 04:29:41.246788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:44.717 [2024-11-27 04:29:41.246862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.717 [2024-11-27 04:29:41.246883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:44.717 [2024-11-27 04:29:41.246894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.717 [2024-11-27 04:29:41.249464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.717 [2024-11-27 04:29:41.249503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.717 BaseBdev1 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.717 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.978 BaseBdev2_malloc 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.978 true 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.978 [2024-11-27 04:29:41.324715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:44.978 [2024-11-27 04:29:41.324786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.978 [2024-11-27 04:29:41.324804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:44.978 [2024-11-27 04:29:41.324816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.978 [2024-11-27 04:29:41.327470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.978 [2024-11-27 04:29:41.327512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:44.978 BaseBdev2 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.978 BaseBdev3_malloc 00:12:44.978 04:29:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.978 true 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.978 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 [2024-11-27 04:29:41.408781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:44.979 [2024-11-27 04:29:41.408857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.979 [2024-11-27 04:29:41.408888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:44.979 [2024-11-27 04:29:41.408899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.979 [2024-11-27 04:29:41.411425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.979 [2024-11-27 04:29:41.411467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:44.979 BaseBdev3 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 BaseBdev4_malloc 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 true 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 [2024-11-27 04:29:41.483199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:44.979 [2024-11-27 04:29:41.483269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.979 [2024-11-27 04:29:41.483299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:44.979 [2024-11-27 04:29:41.483311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.979 [2024-11-27 04:29:41.485749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.979 [2024-11-27 04:29:41.485880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:44.979 BaseBdev4 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 [2024-11-27 04:29:41.495250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.979 [2024-11-27 04:29:41.497584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.979 [2024-11-27 04:29:41.497670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.979 [2024-11-27 04:29:41.497740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.979 [2024-11-27 04:29:41.497986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:44.979 [2024-11-27 04:29:41.498004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:44.979 [2024-11-27 04:29:41.498276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:44.979 [2024-11-27 04:29:41.498463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:44.979 [2024-11-27 04:29:41.498475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:44.979 [2024-11-27 04:29:41.498645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:44.979 04:29:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.979 "name": "raid_bdev1", 00:12:44.979 "uuid": "195f02c4-2c0f-4a91-83b1-8f4b0ecd584e", 00:12:44.979 "strip_size_kb": 64, 00:12:44.979 "state": "online", 00:12:44.979 "raid_level": "raid0", 00:12:44.979 "superblock": true, 00:12:44.979 "num_base_bdevs": 4, 00:12:44.979 "num_base_bdevs_discovered": 4, 00:12:44.979 "num_base_bdevs_operational": 4, 00:12:44.979 "base_bdevs_list": [ 00:12:44.979 
{ 00:12:44.979 "name": "BaseBdev1", 00:12:44.979 "uuid": "a4440a4e-e147-514e-941f-a2684cf3f744", 00:12:44.979 "is_configured": true, 00:12:44.979 "data_offset": 2048, 00:12:44.979 "data_size": 63488 00:12:44.979 }, 00:12:44.979 { 00:12:44.979 "name": "BaseBdev2", 00:12:44.979 "uuid": "902c0e4e-70df-533f-9dbd-1bc5c052c699", 00:12:44.979 "is_configured": true, 00:12:44.979 "data_offset": 2048, 00:12:44.979 "data_size": 63488 00:12:44.979 }, 00:12:44.979 { 00:12:44.979 "name": "BaseBdev3", 00:12:44.979 "uuid": "5a20b2c6-eb57-53e2-8a78-80cbf0ff8f1d", 00:12:44.979 "is_configured": true, 00:12:44.979 "data_offset": 2048, 00:12:44.979 "data_size": 63488 00:12:44.979 }, 00:12:44.979 { 00:12:44.979 "name": "BaseBdev4", 00:12:44.979 "uuid": "2f0f385c-cc57-5fd4-b366-ecb21638a108", 00:12:44.979 "is_configured": true, 00:12:44.979 "data_offset": 2048, 00:12:44.979 "data_size": 63488 00:12:44.979 } 00:12:44.979 ] 00:12:44.979 }' 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.979 04:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.628 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:45.628 04:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:45.628 [2024-11-27 04:29:42.084013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.569 04:29:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.569 04:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.569 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.569 04:29:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.569 "name": "raid_bdev1", 00:12:46.569 "uuid": "195f02c4-2c0f-4a91-83b1-8f4b0ecd584e", 00:12:46.569 "strip_size_kb": 64, 00:12:46.569 "state": "online", 00:12:46.569 "raid_level": "raid0", 00:12:46.569 "superblock": true, 00:12:46.569 "num_base_bdevs": 4, 00:12:46.569 "num_base_bdevs_discovered": 4, 00:12:46.569 "num_base_bdevs_operational": 4, 00:12:46.569 "base_bdevs_list": [ 00:12:46.569 { 00:12:46.569 "name": "BaseBdev1", 00:12:46.569 "uuid": "a4440a4e-e147-514e-941f-a2684cf3f744", 00:12:46.569 "is_configured": true, 00:12:46.569 "data_offset": 2048, 00:12:46.569 "data_size": 63488 00:12:46.569 }, 00:12:46.569 { 00:12:46.569 "name": "BaseBdev2", 00:12:46.569 "uuid": "902c0e4e-70df-533f-9dbd-1bc5c052c699", 00:12:46.569 "is_configured": true, 00:12:46.569 "data_offset": 2048, 00:12:46.569 "data_size": 63488 00:12:46.569 }, 00:12:46.569 { 00:12:46.569 "name": "BaseBdev3", 00:12:46.569 "uuid": "5a20b2c6-eb57-53e2-8a78-80cbf0ff8f1d", 00:12:46.569 "is_configured": true, 00:12:46.569 "data_offset": 2048, 00:12:46.569 "data_size": 63488 00:12:46.569 }, 00:12:46.569 { 00:12:46.569 "name": "BaseBdev4", 00:12:46.569 "uuid": "2f0f385c-cc57-5fd4-b366-ecb21638a108", 00:12:46.569 "is_configured": true, 00:12:46.569 "data_offset": 2048, 00:12:46.569 "data_size": 63488 00:12:46.569 } 00:12:46.569 ] 00:12:46.569 }' 00:12:46.569 04:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.569 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.137 04:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:47.137 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.137 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.137 [2024-11-27 04:29:43.442123] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.137 [2024-11-27 04:29:43.442176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.137 [2024-11-27 04:29:43.445178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.137 [2024-11-27 04:29:43.445247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.137 [2024-11-27 04:29:43.445298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.137 [2024-11-27 04:29:43.445310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:47.137 { 00:12:47.137 "results": [ 00:12:47.137 { 00:12:47.137 "job": "raid_bdev1", 00:12:47.137 "core_mask": "0x1", 00:12:47.137 "workload": "randrw", 00:12:47.137 "percentage": 50, 00:12:47.137 "status": "finished", 00:12:47.137 "queue_depth": 1, 00:12:47.137 "io_size": 131072, 00:12:47.137 "runtime": 1.358178, 00:12:47.137 "iops": 12365.095002275108, 00:12:47.137 "mibps": 1545.6368752843885, 00:12:47.137 "io_failed": 1, 00:12:47.138 "io_timeout": 0, 00:12:47.138 "avg_latency_us": 113.5120083306141, 00:12:47.138 "min_latency_us": 28.50655021834061, 00:12:47.138 "max_latency_us": 1595.4724890829693 00:12:47.138 } 00:12:47.138 ], 00:12:47.138 "core_count": 1 00:12:47.138 } 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71265 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71265 ']' 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71265 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71265 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.138 killing process with pid 71265 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71265' 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71265 00:12:47.138 [2024-11-27 04:29:43.490852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.138 04:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71265 00:12:47.396 [2024-11-27 04:29:43.884935] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.G6DAcijcwu 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:48.794 ************************************ 00:12:48.794 END TEST raid_read_error_test 00:12:48.794 ************************************ 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:12:48.794 00:12:48.794 real 0m5.086s 
00:12:48.794 user 0m5.892s 00:12:48.794 sys 0m0.686s 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.794 04:29:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.794 04:29:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:48.794 04:29:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:48.794 04:29:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.794 04:29:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.794 ************************************ 00:12:48.794 START TEST raid_write_error_test 00:12:48.794 ************************************ 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:48.794 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ior25fbPpG 00:12:49.053 04:29:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71417 00:12:49.053 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:49.053 04:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71417 00:12:49.053 04:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71417 ']' 00:12:49.053 04:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.053 04:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.053 04:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.053 04:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.053 04:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.053 [2024-11-27 04:29:45.471125] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:12:49.053 [2024-11-27 04:29:45.471256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71417 ] 00:12:49.312 [2024-11-27 04:29:45.650821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.312 [2024-11-27 04:29:45.797914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.570 [2024-11-27 04:29:46.047965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.570 [2024-11-27 04:29:46.048018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.830 BaseBdev1_malloc 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.830 true 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.830 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.090 [2024-11-27 04:29:46.414473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:50.090 [2024-11-27 04:29:46.414550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.090 [2024-11-27 04:29:46.414573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:50.090 [2024-11-27 04:29:46.414585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.090 [2024-11-27 04:29:46.417157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.090 [2024-11-27 04:29:46.417196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:50.090 BaseBdev1 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.090 BaseBdev2_malloc 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:50.090 04:29:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.090 true 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.090 [2024-11-27 04:29:46.476597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:50.090 [2024-11-27 04:29:46.476763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.090 [2024-11-27 04:29:46.476790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:50.090 [2024-11-27 04:29:46.476804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.090 [2024-11-27 04:29:46.479403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.090 [2024-11-27 04:29:46.479444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:50.090 BaseBdev2 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:50.090 BaseBdev3_malloc 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.090 true 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.090 [2024-11-27 04:29:46.556878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:50.090 [2024-11-27 04:29:46.557041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.090 [2024-11-27 04:29:46.557071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:50.090 [2024-11-27 04:29:46.557125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.090 [2024-11-27 04:29:46.559913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.090 [2024-11-27 04:29:46.559960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:50.090 BaseBdev3 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.090 BaseBdev4_malloc 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.090 true 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.090 [2024-11-27 04:29:46.623426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:50.090 [2024-11-27 04:29:46.623499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.090 [2024-11-27 04:29:46.623521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:50.090 [2024-11-27 04:29:46.623534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.090 [2024-11-27 04:29:46.626208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.090 [2024-11-27 04:29:46.626249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:50.090 BaseBdev4 
00:12:50.090 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.091 [2024-11-27 04:29:46.631498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.091 [2024-11-27 04:29:46.633898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.091 [2024-11-27 04:29:46.633977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.091 [2024-11-27 04:29:46.634041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:50.091 [2024-11-27 04:29:46.634280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:50.091 [2024-11-27 04:29:46.634299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:50.091 [2024-11-27 04:29:46.634562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:50.091 [2024-11-27 04:29:46.634742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:50.091 [2024-11-27 04:29:46.634754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:50.091 [2024-11-27 04:29:46.634918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.091 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.350 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.350 "name": "raid_bdev1", 00:12:50.350 "uuid": "5117ec63-d450-487a-b8f6-915dff931c90", 00:12:50.350 "strip_size_kb": 64, 00:12:50.350 "state": "online", 00:12:50.350 "raid_level": "raid0", 00:12:50.350 "superblock": true, 00:12:50.350 "num_base_bdevs": 4, 00:12:50.350 "num_base_bdevs_discovered": 4, 00:12:50.350 
"num_base_bdevs_operational": 4, 00:12:50.350 "base_bdevs_list": [ 00:12:50.350 { 00:12:50.350 "name": "BaseBdev1", 00:12:50.350 "uuid": "8b2b05f8-87a1-5715-bd49-bdaece760410", 00:12:50.350 "is_configured": true, 00:12:50.350 "data_offset": 2048, 00:12:50.350 "data_size": 63488 00:12:50.350 }, 00:12:50.350 { 00:12:50.350 "name": "BaseBdev2", 00:12:50.350 "uuid": "9cc56d20-5d1a-5163-b724-f3159a9e2aaf", 00:12:50.350 "is_configured": true, 00:12:50.350 "data_offset": 2048, 00:12:50.350 "data_size": 63488 00:12:50.350 }, 00:12:50.350 { 00:12:50.350 "name": "BaseBdev3", 00:12:50.350 "uuid": "616abf11-7777-51e8-8cb9-5f0245a990dc", 00:12:50.350 "is_configured": true, 00:12:50.350 "data_offset": 2048, 00:12:50.350 "data_size": 63488 00:12:50.350 }, 00:12:50.350 { 00:12:50.350 "name": "BaseBdev4", 00:12:50.350 "uuid": "1286ea75-269c-588d-a68a-8d8cd5e7c2fb", 00:12:50.350 "is_configured": true, 00:12:50.350 "data_offset": 2048, 00:12:50.350 "data_size": 63488 00:12:50.350 } 00:12:50.350 ] 00:12:50.350 }' 00:12:50.350 04:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.350 04:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.608 04:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:50.608 04:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:50.608 [2024-11-27 04:29:47.183954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.544 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.802 04:29:48 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.802 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.802 "name": "raid_bdev1", 00:12:51.802 "uuid": "5117ec63-d450-487a-b8f6-915dff931c90", 00:12:51.802 "strip_size_kb": 64, 00:12:51.802 "state": "online", 00:12:51.802 "raid_level": "raid0", 00:12:51.802 "superblock": true, 00:12:51.802 "num_base_bdevs": 4, 00:12:51.802 "num_base_bdevs_discovered": 4, 00:12:51.802 "num_base_bdevs_operational": 4, 00:12:51.802 "base_bdevs_list": [ 00:12:51.802 { 00:12:51.802 "name": "BaseBdev1", 00:12:51.802 "uuid": "8b2b05f8-87a1-5715-bd49-bdaece760410", 00:12:51.802 "is_configured": true, 00:12:51.802 "data_offset": 2048, 00:12:51.802 "data_size": 63488 00:12:51.802 }, 00:12:51.802 { 00:12:51.802 "name": "BaseBdev2", 00:12:51.802 "uuid": "9cc56d20-5d1a-5163-b724-f3159a9e2aaf", 00:12:51.802 "is_configured": true, 00:12:51.802 "data_offset": 2048, 00:12:51.802 "data_size": 63488 00:12:51.802 }, 00:12:51.802 { 00:12:51.802 "name": "BaseBdev3", 00:12:51.802 "uuid": "616abf11-7777-51e8-8cb9-5f0245a990dc", 00:12:51.802 "is_configured": true, 00:12:51.802 "data_offset": 2048, 00:12:51.802 "data_size": 63488 00:12:51.803 }, 00:12:51.803 { 00:12:51.803 "name": "BaseBdev4", 00:12:51.803 "uuid": "1286ea75-269c-588d-a68a-8d8cd5e7c2fb", 00:12:51.803 "is_configured": true, 00:12:51.803 "data_offset": 2048, 00:12:51.803 "data_size": 63488 00:12:51.803 } 00:12:51.803 ] 00:12:51.803 }' 00:12:51.803 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.803 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.060 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:52.060 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.060 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:52.060 [2024-11-27 04:29:48.533179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:52.060 [2024-11-27 04:29:48.533340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.060 [2024-11-27 04:29:48.536509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.060 [2024-11-27 04:29:48.536643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.060 [2024-11-27 04:29:48.536713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.061 [2024-11-27 04:29:48.536761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:52.061 { 00:12:52.061 "results": [ 00:12:52.061 { 00:12:52.061 "job": "raid_bdev1", 00:12:52.061 "core_mask": "0x1", 00:12:52.061 "workload": "randrw", 00:12:52.061 "percentage": 50, 00:12:52.061 "status": "finished", 00:12:52.061 "queue_depth": 1, 00:12:52.061 "io_size": 131072, 00:12:52.061 "runtime": 1.349694, 00:12:52.061 "iops": 12325.015892491187, 00:12:52.061 "mibps": 1540.6269865613983, 00:12:52.061 "io_failed": 1, 00:12:52.061 "io_timeout": 0, 00:12:52.061 "avg_latency_us": 113.89006468845908, 00:12:52.061 "min_latency_us": 28.841921397379913, 00:12:52.061 "max_latency_us": 1430.9170305676855 00:12:52.061 } 00:12:52.061 ], 00:12:52.061 "core_count": 1 00:12:52.061 } 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71417 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71417 ']' 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71417 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71417 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71417' 00:12:52.061 killing process with pid 71417 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71417 00:12:52.061 [2024-11-27 04:29:48.582363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.061 04:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71417 00:12:52.626 [2024-11-27 04:29:48.971839] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ior25fbPpG 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:54.001 ************************************ 00:12:54.001 END TEST raid_write_error_test 00:12:54.001 ************************************ 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:12:54.001 00:12:54.001 real 0m5.039s 00:12:54.001 user 0m5.805s 00:12:54.001 sys 0m0.697s 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.001 04:29:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.001 04:29:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:54.001 04:29:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:54.001 04:29:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:54.001 04:29:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.001 04:29:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.001 ************************************ 00:12:54.001 START TEST raid_state_function_test 00:12:54.001 ************************************ 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.001 04:29:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:54.001 04:29:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:54.001 Process raid pid: 71566 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71566 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71566' 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71566 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71566 ']' 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.001 04:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.002 04:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.002 04:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.002 04:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.002 [2024-11-27 04:29:50.560161] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:12:54.002 [2024-11-27 04:29:50.560403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.260 [2024-11-27 04:29:50.752784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.519 [2024-11-27 04:29:50.908691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.777 [2024-11-27 04:29:51.183115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.777 [2024-11-27 04:29:51.183177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.037 [2024-11-27 04:29:51.434326] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.037 [2024-11-27 04:29:51.434408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.037 [2024-11-27 04:29:51.434419] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.037 [2024-11-27 04:29:51.434431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.037 [2024-11-27 04:29:51.434438] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:55.037 [2024-11-27 04:29:51.434449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.037 [2024-11-27 04:29:51.434455] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:55.037 [2024-11-27 04:29:51.434465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.037 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.037 "name": "Existed_Raid", 00:12:55.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.037 "strip_size_kb": 64, 00:12:55.037 "state": "configuring", 00:12:55.037 "raid_level": "concat", 00:12:55.037 "superblock": false, 00:12:55.037 "num_base_bdevs": 4, 00:12:55.037 "num_base_bdevs_discovered": 0, 00:12:55.037 "num_base_bdevs_operational": 4, 00:12:55.037 "base_bdevs_list": [ 00:12:55.037 { 00:12:55.037 "name": "BaseBdev1", 00:12:55.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.037 "is_configured": false, 00:12:55.037 "data_offset": 0, 00:12:55.037 "data_size": 0 00:12:55.037 }, 00:12:55.037 { 00:12:55.037 "name": "BaseBdev2", 00:12:55.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.037 "is_configured": false, 00:12:55.037 "data_offset": 0, 00:12:55.037 "data_size": 0 00:12:55.037 }, 00:12:55.037 { 00:12:55.037 "name": "BaseBdev3", 00:12:55.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.037 "is_configured": false, 00:12:55.037 "data_offset": 0, 00:12:55.037 "data_size": 0 00:12:55.038 }, 00:12:55.038 { 00:12:55.038 "name": "BaseBdev4", 00:12:55.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.038 "is_configured": false, 00:12:55.038 "data_offset": 0, 00:12:55.038 "data_size": 0 00:12:55.038 } 00:12:55.038 ] 00:12:55.038 }' 00:12:55.038 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.038 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.296 [2024-11-27 04:29:51.861547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.296 [2024-11-27 04:29:51.861713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.296 [2024-11-27 04:29:51.869486] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.296 [2024-11-27 04:29:51.869599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.296 [2024-11-27 04:29:51.869636] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.296 [2024-11-27 04:29:51.869665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.296 [2024-11-27 04:29:51.869714] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:55.296 [2024-11-27 04:29:51.869754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.296 [2024-11-27 04:29:51.869786] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:55.296 [2024-11-27 04:29:51.869825] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.296 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.556 [2024-11-27 04:29:51.922437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.556 BaseBdev1 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.556 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.556 [ 00:12:55.556 { 00:12:55.556 "name": "BaseBdev1", 00:12:55.556 "aliases": [ 00:12:55.556 "60a61142-663e-40f9-9530-01badc64d5e8" 00:12:55.556 ], 00:12:55.557 "product_name": "Malloc disk", 00:12:55.557 "block_size": 512, 00:12:55.557 "num_blocks": 65536, 00:12:55.557 "uuid": "60a61142-663e-40f9-9530-01badc64d5e8", 00:12:55.557 "assigned_rate_limits": { 00:12:55.557 "rw_ios_per_sec": 0, 00:12:55.557 "rw_mbytes_per_sec": 0, 00:12:55.557 "r_mbytes_per_sec": 0, 00:12:55.557 "w_mbytes_per_sec": 0 00:12:55.557 }, 00:12:55.557 "claimed": true, 00:12:55.557 "claim_type": "exclusive_write", 00:12:55.557 "zoned": false, 00:12:55.557 "supported_io_types": { 00:12:55.557 "read": true, 00:12:55.557 "write": true, 00:12:55.557 "unmap": true, 00:12:55.557 "flush": true, 00:12:55.557 "reset": true, 00:12:55.557 "nvme_admin": false, 00:12:55.557 "nvme_io": false, 00:12:55.557 "nvme_io_md": false, 00:12:55.557 "write_zeroes": true, 00:12:55.557 "zcopy": true, 00:12:55.557 "get_zone_info": false, 00:12:55.557 "zone_management": false, 00:12:55.557 "zone_append": false, 00:12:55.557 "compare": false, 00:12:55.557 "compare_and_write": false, 00:12:55.557 "abort": true, 00:12:55.557 "seek_hole": false, 00:12:55.557 "seek_data": false, 00:12:55.557 "copy": true, 00:12:55.557 "nvme_iov_md": false 00:12:55.557 }, 00:12:55.557 "memory_domains": [ 00:12:55.557 { 00:12:55.557 "dma_device_id": "system", 00:12:55.557 "dma_device_type": 1 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.557 "dma_device_type": 2 00:12:55.557 } 00:12:55.557 ], 00:12:55.557 "driver_specific": {} 00:12:55.557 } 00:12:55.557 ] 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.557 04:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.557 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.557 "name": "Existed_Raid", 
00:12:55.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.557 "strip_size_kb": 64, 00:12:55.557 "state": "configuring", 00:12:55.557 "raid_level": "concat", 00:12:55.557 "superblock": false, 00:12:55.557 "num_base_bdevs": 4, 00:12:55.557 "num_base_bdevs_discovered": 1, 00:12:55.557 "num_base_bdevs_operational": 4, 00:12:55.557 "base_bdevs_list": [ 00:12:55.557 { 00:12:55.557 "name": "BaseBdev1", 00:12:55.557 "uuid": "60a61142-663e-40f9-9530-01badc64d5e8", 00:12:55.557 "is_configured": true, 00:12:55.557 "data_offset": 0, 00:12:55.557 "data_size": 65536 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "name": "BaseBdev2", 00:12:55.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.557 "is_configured": false, 00:12:55.557 "data_offset": 0, 00:12:55.557 "data_size": 0 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "name": "BaseBdev3", 00:12:55.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.557 "is_configured": false, 00:12:55.557 "data_offset": 0, 00:12:55.557 "data_size": 0 00:12:55.557 }, 00:12:55.557 { 00:12:55.557 "name": "BaseBdev4", 00:12:55.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.557 "is_configured": false, 00:12:55.557 "data_offset": 0, 00:12:55.557 "data_size": 0 00:12:55.557 } 00:12:55.557 ] 00:12:55.557 }' 00:12:55.557 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.557 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 [2024-11-27 04:29:52.425703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.127 [2024-11-27 04:29:52.425798] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 [2024-11-27 04:29:52.433696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.127 [2024-11-27 04:29:52.435986] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.127 [2024-11-27 04:29:52.436092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.127 [2024-11-27 04:29:52.436132] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:56.127 [2024-11-27 04:29:52.436163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:56.127 [2024-11-27 04:29:52.436210] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:56.127 [2024-11-27 04:29:52.436256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.127 "name": "Existed_Raid", 00:12:56.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.127 "strip_size_kb": 64, 00:12:56.127 "state": "configuring", 00:12:56.127 "raid_level": "concat", 00:12:56.127 "superblock": false, 00:12:56.127 "num_base_bdevs": 4, 00:12:56.127 
"num_base_bdevs_discovered": 1, 00:12:56.127 "num_base_bdevs_operational": 4, 00:12:56.127 "base_bdevs_list": [ 00:12:56.127 { 00:12:56.127 "name": "BaseBdev1", 00:12:56.127 "uuid": "60a61142-663e-40f9-9530-01badc64d5e8", 00:12:56.127 "is_configured": true, 00:12:56.127 "data_offset": 0, 00:12:56.127 "data_size": 65536 00:12:56.127 }, 00:12:56.127 { 00:12:56.127 "name": "BaseBdev2", 00:12:56.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.127 "is_configured": false, 00:12:56.127 "data_offset": 0, 00:12:56.127 "data_size": 0 00:12:56.127 }, 00:12:56.127 { 00:12:56.127 "name": "BaseBdev3", 00:12:56.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.127 "is_configured": false, 00:12:56.127 "data_offset": 0, 00:12:56.127 "data_size": 0 00:12:56.127 }, 00:12:56.127 { 00:12:56.127 "name": "BaseBdev4", 00:12:56.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.127 "is_configured": false, 00:12:56.127 "data_offset": 0, 00:12:56.127 "data_size": 0 00:12:56.127 } 00:12:56.127 ] 00:12:56.127 }' 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.127 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.386 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.386 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.386 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.386 [2024-11-27 04:29:52.928050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.386 BaseBdev2 00:12:56.386 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.386 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:56.386 04:29:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:56.386 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 [ 00:12:56.387 { 00:12:56.387 "name": "BaseBdev2", 00:12:56.387 "aliases": [ 00:12:56.387 "f0e27fdf-23ee-458f-b365-224b958c7d77" 00:12:56.387 ], 00:12:56.387 "product_name": "Malloc disk", 00:12:56.387 "block_size": 512, 00:12:56.387 "num_blocks": 65536, 00:12:56.387 "uuid": "f0e27fdf-23ee-458f-b365-224b958c7d77", 00:12:56.387 "assigned_rate_limits": { 00:12:56.387 "rw_ios_per_sec": 0, 00:12:56.387 "rw_mbytes_per_sec": 0, 00:12:56.387 "r_mbytes_per_sec": 0, 00:12:56.387 "w_mbytes_per_sec": 0 00:12:56.387 }, 00:12:56.387 "claimed": true, 00:12:56.387 "claim_type": "exclusive_write", 00:12:56.387 "zoned": false, 00:12:56.387 "supported_io_types": { 
00:12:56.387 "read": true, 00:12:56.387 "write": true, 00:12:56.387 "unmap": true, 00:12:56.387 "flush": true, 00:12:56.387 "reset": true, 00:12:56.387 "nvme_admin": false, 00:12:56.387 "nvme_io": false, 00:12:56.387 "nvme_io_md": false, 00:12:56.387 "write_zeroes": true, 00:12:56.387 "zcopy": true, 00:12:56.387 "get_zone_info": false, 00:12:56.387 "zone_management": false, 00:12:56.387 "zone_append": false, 00:12:56.387 "compare": false, 00:12:56.387 "compare_and_write": false, 00:12:56.387 "abort": true, 00:12:56.387 "seek_hole": false, 00:12:56.387 "seek_data": false, 00:12:56.387 "copy": true, 00:12:56.387 "nvme_iov_md": false 00:12:56.387 }, 00:12:56.387 "memory_domains": [ 00:12:56.387 { 00:12:56.387 "dma_device_id": "system", 00:12:56.387 "dma_device_type": 1 00:12:56.387 }, 00:12:56.387 { 00:12:56.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.387 "dma_device_type": 2 00:12:56.387 } 00:12:56.387 ], 00:12:56.387 "driver_specific": {} 00:12:56.387 } 00:12:56.387 ] 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.387 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.646 04:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.646 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.646 "name": "Existed_Raid", 00:12:56.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.646 "strip_size_kb": 64, 00:12:56.646 "state": "configuring", 00:12:56.646 "raid_level": "concat", 00:12:56.646 "superblock": false, 00:12:56.646 "num_base_bdevs": 4, 00:12:56.646 "num_base_bdevs_discovered": 2, 00:12:56.646 "num_base_bdevs_operational": 4, 00:12:56.646 "base_bdevs_list": [ 00:12:56.646 { 00:12:56.646 "name": "BaseBdev1", 00:12:56.646 "uuid": "60a61142-663e-40f9-9530-01badc64d5e8", 00:12:56.646 "is_configured": true, 00:12:56.646 "data_offset": 0, 00:12:56.646 "data_size": 65536 00:12:56.646 }, 00:12:56.646 { 00:12:56.646 "name": "BaseBdev2", 00:12:56.646 "uuid": "f0e27fdf-23ee-458f-b365-224b958c7d77", 00:12:56.646 
"is_configured": true, 00:12:56.646 "data_offset": 0, 00:12:56.646 "data_size": 65536 00:12:56.646 }, 00:12:56.646 { 00:12:56.646 "name": "BaseBdev3", 00:12:56.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.646 "is_configured": false, 00:12:56.646 "data_offset": 0, 00:12:56.646 "data_size": 0 00:12:56.646 }, 00:12:56.646 { 00:12:56.646 "name": "BaseBdev4", 00:12:56.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.646 "is_configured": false, 00:12:56.646 "data_offset": 0, 00:12:56.646 "data_size": 0 00:12:56.646 } 00:12:56.646 ] 00:12:56.646 }' 00:12:56.646 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.646 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.905 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:56.905 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.905 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.163 [2024-11-27 04:29:53.506598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.163 BaseBdev3 00:12:57.163 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.163 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:57.163 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:57.163 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:57.163 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.164 [ 00:12:57.164 { 00:12:57.164 "name": "BaseBdev3", 00:12:57.164 "aliases": [ 00:12:57.164 "7bf2d272-a533-4757-8f13-5597c734d1b4" 00:12:57.164 ], 00:12:57.164 "product_name": "Malloc disk", 00:12:57.164 "block_size": 512, 00:12:57.164 "num_blocks": 65536, 00:12:57.164 "uuid": "7bf2d272-a533-4757-8f13-5597c734d1b4", 00:12:57.164 "assigned_rate_limits": { 00:12:57.164 "rw_ios_per_sec": 0, 00:12:57.164 "rw_mbytes_per_sec": 0, 00:12:57.164 "r_mbytes_per_sec": 0, 00:12:57.164 "w_mbytes_per_sec": 0 00:12:57.164 }, 00:12:57.164 "claimed": true, 00:12:57.164 "claim_type": "exclusive_write", 00:12:57.164 "zoned": false, 00:12:57.164 "supported_io_types": { 00:12:57.164 "read": true, 00:12:57.164 "write": true, 00:12:57.164 "unmap": true, 00:12:57.164 "flush": true, 00:12:57.164 "reset": true, 00:12:57.164 "nvme_admin": false, 00:12:57.164 "nvme_io": false, 00:12:57.164 "nvme_io_md": false, 00:12:57.164 "write_zeroes": true, 00:12:57.164 "zcopy": true, 00:12:57.164 "get_zone_info": false, 00:12:57.164 "zone_management": false, 00:12:57.164 "zone_append": false, 00:12:57.164 "compare": false, 00:12:57.164 "compare_and_write": false, 
00:12:57.164 "abort": true, 00:12:57.164 "seek_hole": false, 00:12:57.164 "seek_data": false, 00:12:57.164 "copy": true, 00:12:57.164 "nvme_iov_md": false 00:12:57.164 }, 00:12:57.164 "memory_domains": [ 00:12:57.164 { 00:12:57.164 "dma_device_id": "system", 00:12:57.164 "dma_device_type": 1 00:12:57.164 }, 00:12:57.164 { 00:12:57.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.164 "dma_device_type": 2 00:12:57.164 } 00:12:57.164 ], 00:12:57.164 "driver_specific": {} 00:12:57.164 } 00:12:57.164 ] 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.164 "name": "Existed_Raid", 00:12:57.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.164 "strip_size_kb": 64, 00:12:57.164 "state": "configuring", 00:12:57.164 "raid_level": "concat", 00:12:57.164 "superblock": false, 00:12:57.164 "num_base_bdevs": 4, 00:12:57.164 "num_base_bdevs_discovered": 3, 00:12:57.164 "num_base_bdevs_operational": 4, 00:12:57.164 "base_bdevs_list": [ 00:12:57.164 { 00:12:57.164 "name": "BaseBdev1", 00:12:57.164 "uuid": "60a61142-663e-40f9-9530-01badc64d5e8", 00:12:57.164 "is_configured": true, 00:12:57.164 "data_offset": 0, 00:12:57.164 "data_size": 65536 00:12:57.164 }, 00:12:57.164 { 00:12:57.164 "name": "BaseBdev2", 00:12:57.164 "uuid": "f0e27fdf-23ee-458f-b365-224b958c7d77", 00:12:57.164 "is_configured": true, 00:12:57.164 "data_offset": 0, 00:12:57.164 "data_size": 65536 00:12:57.164 }, 00:12:57.164 { 00:12:57.164 "name": "BaseBdev3", 00:12:57.164 "uuid": "7bf2d272-a533-4757-8f13-5597c734d1b4", 00:12:57.164 "is_configured": true, 00:12:57.164 "data_offset": 0, 00:12:57.164 "data_size": 65536 00:12:57.164 }, 00:12:57.164 { 00:12:57.164 "name": "BaseBdev4", 00:12:57.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.164 "is_configured": false, 
00:12:57.164 "data_offset": 0, 00:12:57.164 "data_size": 0 00:12:57.164 } 00:12:57.164 ] 00:12:57.164 }' 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.164 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.422 04:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:57.422 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.422 04:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.682 [2024-11-27 04:29:54.024996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:57.682 [2024-11-27 04:29:54.025061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:57.682 [2024-11-27 04:29:54.025071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:57.682 [2024-11-27 04:29:54.025436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:57.682 [2024-11-27 04:29:54.025631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:57.682 [2024-11-27 04:29:54.025650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:57.682 [2024-11-27 04:29:54.025952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.682 BaseBdev4 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.682 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.682 [ 00:12:57.682 { 00:12:57.682 "name": "BaseBdev4", 00:12:57.682 "aliases": [ 00:12:57.682 "64404add-2f93-4943-b342-b5461772df84" 00:12:57.682 ], 00:12:57.682 "product_name": "Malloc disk", 00:12:57.682 "block_size": 512, 00:12:57.682 "num_blocks": 65536, 00:12:57.682 "uuid": "64404add-2f93-4943-b342-b5461772df84", 00:12:57.682 "assigned_rate_limits": { 00:12:57.682 "rw_ios_per_sec": 0, 00:12:57.682 "rw_mbytes_per_sec": 0, 00:12:57.682 "r_mbytes_per_sec": 0, 00:12:57.682 "w_mbytes_per_sec": 0 00:12:57.682 }, 00:12:57.682 "claimed": true, 00:12:57.682 "claim_type": "exclusive_write", 00:12:57.682 "zoned": false, 00:12:57.682 "supported_io_types": { 00:12:57.682 "read": true, 00:12:57.682 "write": true, 00:12:57.682 "unmap": true, 00:12:57.682 "flush": true, 00:12:57.682 "reset": true, 00:12:57.682 
"nvme_admin": false, 00:12:57.682 "nvme_io": false, 00:12:57.682 "nvme_io_md": false, 00:12:57.682 "write_zeroes": true, 00:12:57.682 "zcopy": true, 00:12:57.682 "get_zone_info": false, 00:12:57.682 "zone_management": false, 00:12:57.682 "zone_append": false, 00:12:57.682 "compare": false, 00:12:57.683 "compare_and_write": false, 00:12:57.683 "abort": true, 00:12:57.683 "seek_hole": false, 00:12:57.683 "seek_data": false, 00:12:57.683 "copy": true, 00:12:57.683 "nvme_iov_md": false 00:12:57.683 }, 00:12:57.683 "memory_domains": [ 00:12:57.683 { 00:12:57.683 "dma_device_id": "system", 00:12:57.683 "dma_device_type": 1 00:12:57.683 }, 00:12:57.683 { 00:12:57.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.683 "dma_device_type": 2 00:12:57.683 } 00:12:57.683 ], 00:12:57.683 "driver_specific": {} 00:12:57.683 } 00:12:57.683 ] 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.683 
04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.683 "name": "Existed_Raid", 00:12:57.683 "uuid": "f256cdaa-72d8-484d-9c2e-4cfa99f0800d", 00:12:57.683 "strip_size_kb": 64, 00:12:57.683 "state": "online", 00:12:57.683 "raid_level": "concat", 00:12:57.683 "superblock": false, 00:12:57.683 "num_base_bdevs": 4, 00:12:57.683 "num_base_bdevs_discovered": 4, 00:12:57.683 "num_base_bdevs_operational": 4, 00:12:57.683 "base_bdevs_list": [ 00:12:57.683 { 00:12:57.683 "name": "BaseBdev1", 00:12:57.683 "uuid": "60a61142-663e-40f9-9530-01badc64d5e8", 00:12:57.683 "is_configured": true, 00:12:57.683 "data_offset": 0, 00:12:57.683 "data_size": 65536 00:12:57.683 }, 00:12:57.683 { 00:12:57.683 "name": "BaseBdev2", 00:12:57.683 "uuid": "f0e27fdf-23ee-458f-b365-224b958c7d77", 00:12:57.683 "is_configured": true, 00:12:57.683 "data_offset": 0, 00:12:57.683 "data_size": 65536 00:12:57.683 }, 00:12:57.683 { 00:12:57.683 "name": "BaseBdev3", 
00:12:57.683 "uuid": "7bf2d272-a533-4757-8f13-5597c734d1b4", 00:12:57.683 "is_configured": true, 00:12:57.683 "data_offset": 0, 00:12:57.683 "data_size": 65536 00:12:57.683 }, 00:12:57.683 { 00:12:57.683 "name": "BaseBdev4", 00:12:57.683 "uuid": "64404add-2f93-4943-b342-b5461772df84", 00:12:57.683 "is_configured": true, 00:12:57.683 "data_offset": 0, 00:12:57.683 "data_size": 65536 00:12:57.683 } 00:12:57.683 ] 00:12:57.683 }' 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.683 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.941 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.941 [2024-11-27 04:29:54.508696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.199 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.199 
04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:58.199 "name": "Existed_Raid", 00:12:58.199 "aliases": [ 00:12:58.199 "f256cdaa-72d8-484d-9c2e-4cfa99f0800d" 00:12:58.199 ], 00:12:58.199 "product_name": "Raid Volume", 00:12:58.199 "block_size": 512, 00:12:58.199 "num_blocks": 262144, 00:12:58.199 "uuid": "f256cdaa-72d8-484d-9c2e-4cfa99f0800d", 00:12:58.199 "assigned_rate_limits": { 00:12:58.199 "rw_ios_per_sec": 0, 00:12:58.199 "rw_mbytes_per_sec": 0, 00:12:58.199 "r_mbytes_per_sec": 0, 00:12:58.199 "w_mbytes_per_sec": 0 00:12:58.199 }, 00:12:58.199 "claimed": false, 00:12:58.200 "zoned": false, 00:12:58.200 "supported_io_types": { 00:12:58.200 "read": true, 00:12:58.200 "write": true, 00:12:58.200 "unmap": true, 00:12:58.200 "flush": true, 00:12:58.200 "reset": true, 00:12:58.200 "nvme_admin": false, 00:12:58.200 "nvme_io": false, 00:12:58.200 "nvme_io_md": false, 00:12:58.200 "write_zeroes": true, 00:12:58.200 "zcopy": false, 00:12:58.200 "get_zone_info": false, 00:12:58.200 "zone_management": false, 00:12:58.200 "zone_append": false, 00:12:58.200 "compare": false, 00:12:58.200 "compare_and_write": false, 00:12:58.200 "abort": false, 00:12:58.200 "seek_hole": false, 00:12:58.200 "seek_data": false, 00:12:58.200 "copy": false, 00:12:58.200 "nvme_iov_md": false 00:12:58.200 }, 00:12:58.200 "memory_domains": [ 00:12:58.200 { 00:12:58.200 "dma_device_id": "system", 00:12:58.200 "dma_device_type": 1 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.200 "dma_device_type": 2 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "dma_device_id": "system", 00:12:58.200 "dma_device_type": 1 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.200 "dma_device_type": 2 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "dma_device_id": "system", 00:12:58.200 "dma_device_type": 1 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:58.200 "dma_device_type": 2 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "dma_device_id": "system", 00:12:58.200 "dma_device_type": 1 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.200 "dma_device_type": 2 00:12:58.200 } 00:12:58.200 ], 00:12:58.200 "driver_specific": { 00:12:58.200 "raid": { 00:12:58.200 "uuid": "f256cdaa-72d8-484d-9c2e-4cfa99f0800d", 00:12:58.200 "strip_size_kb": 64, 00:12:58.200 "state": "online", 00:12:58.200 "raid_level": "concat", 00:12:58.200 "superblock": false, 00:12:58.200 "num_base_bdevs": 4, 00:12:58.200 "num_base_bdevs_discovered": 4, 00:12:58.200 "num_base_bdevs_operational": 4, 00:12:58.200 "base_bdevs_list": [ 00:12:58.200 { 00:12:58.200 "name": "BaseBdev1", 00:12:58.200 "uuid": "60a61142-663e-40f9-9530-01badc64d5e8", 00:12:58.200 "is_configured": true, 00:12:58.200 "data_offset": 0, 00:12:58.200 "data_size": 65536 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "name": "BaseBdev2", 00:12:58.200 "uuid": "f0e27fdf-23ee-458f-b365-224b958c7d77", 00:12:58.200 "is_configured": true, 00:12:58.200 "data_offset": 0, 00:12:58.200 "data_size": 65536 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "name": "BaseBdev3", 00:12:58.200 "uuid": "7bf2d272-a533-4757-8f13-5597c734d1b4", 00:12:58.200 "is_configured": true, 00:12:58.200 "data_offset": 0, 00:12:58.200 "data_size": 65536 00:12:58.200 }, 00:12:58.200 { 00:12:58.200 "name": "BaseBdev4", 00:12:58.200 "uuid": "64404add-2f93-4943-b342-b5461772df84", 00:12:58.200 "is_configured": true, 00:12:58.200 "data_offset": 0, 00:12:58.200 "data_size": 65536 00:12:58.200 } 00:12:58.200 ] 00:12:58.200 } 00:12:58.200 } 00:12:58.200 }' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:58.200 BaseBdev2 
00:12:58.200 BaseBdev3 00:12:58.200 BaseBdev4' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.200 04:29:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.200 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.459 04:29:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.459 [2024-11-27 04:29:54.811881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:58.459 [2024-11-27 04:29:54.811938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.459 [2024-11-27 04:29:54.812008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.459 "name": "Existed_Raid", 00:12:58.459 "uuid": "f256cdaa-72d8-484d-9c2e-4cfa99f0800d", 00:12:58.459 "strip_size_kb": 64, 00:12:58.459 "state": "offline", 00:12:58.459 "raid_level": "concat", 00:12:58.459 "superblock": false, 00:12:58.459 "num_base_bdevs": 4, 00:12:58.459 "num_base_bdevs_discovered": 3, 00:12:58.459 "num_base_bdevs_operational": 3, 00:12:58.459 "base_bdevs_list": [ 00:12:58.459 { 00:12:58.459 "name": null, 00:12:58.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.459 "is_configured": false, 00:12:58.459 "data_offset": 0, 00:12:58.459 "data_size": 65536 00:12:58.459 }, 00:12:58.459 { 00:12:58.459 "name": "BaseBdev2", 00:12:58.459 "uuid": "f0e27fdf-23ee-458f-b365-224b958c7d77", 00:12:58.459 "is_configured": 
true, 00:12:58.459 "data_offset": 0, 00:12:58.459 "data_size": 65536 00:12:58.459 }, 00:12:58.459 { 00:12:58.459 "name": "BaseBdev3", 00:12:58.459 "uuid": "7bf2d272-a533-4757-8f13-5597c734d1b4", 00:12:58.459 "is_configured": true, 00:12:58.459 "data_offset": 0, 00:12:58.459 "data_size": 65536 00:12:58.459 }, 00:12:58.459 { 00:12:58.459 "name": "BaseBdev4", 00:12:58.459 "uuid": "64404add-2f93-4943-b342-b5461772df84", 00:12:58.459 "is_configured": true, 00:12:58.459 "data_offset": 0, 00:12:58.459 "data_size": 65536 00:12:58.459 } 00:12:58.459 ] 00:12:58.459 }' 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.459 04:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.025 [2024-11-27 04:29:55.405027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.025 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.025 [2024-11-27 04:29:55.577857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:59.285 04:29:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.285 [2024-11-27 04:29:55.748737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:59.285 [2024-11-27 04:29:55.748902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:59.285 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.544 BaseBdev2 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.544 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.545 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.545 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:59.545 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.545 04:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.545 [ 00:12:59.545 { 00:12:59.545 "name": "BaseBdev2", 00:12:59.545 "aliases": [ 00:12:59.545 "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6" 00:12:59.545 ], 00:12:59.545 "product_name": "Malloc disk", 00:12:59.545 "block_size": 512, 00:12:59.545 "num_blocks": 65536, 00:12:59.545 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:12:59.545 "assigned_rate_limits": { 00:12:59.545 "rw_ios_per_sec": 0, 00:12:59.545 "rw_mbytes_per_sec": 0, 00:12:59.545 "r_mbytes_per_sec": 0, 00:12:59.545 "w_mbytes_per_sec": 0 00:12:59.545 }, 00:12:59.545 "claimed": false, 00:12:59.545 "zoned": false, 00:12:59.545 "supported_io_types": { 00:12:59.545 "read": true, 00:12:59.545 "write": true, 00:12:59.545 "unmap": true, 00:12:59.545 "flush": true, 00:12:59.545 "reset": true, 00:12:59.545 "nvme_admin": false, 00:12:59.545 "nvme_io": false, 00:12:59.545 "nvme_io_md": false, 00:12:59.545 "write_zeroes": true, 00:12:59.545 "zcopy": true, 00:12:59.545 "get_zone_info": false, 00:12:59.545 "zone_management": false, 00:12:59.545 "zone_append": false, 00:12:59.545 "compare": false, 00:12:59.545 "compare_and_write": false, 00:12:59.545 "abort": true, 00:12:59.545 "seek_hole": false, 00:12:59.545 
"seek_data": false, 00:12:59.545 "copy": true, 00:12:59.545 "nvme_iov_md": false 00:12:59.545 }, 00:12:59.545 "memory_domains": [ 00:12:59.545 { 00:12:59.545 "dma_device_id": "system", 00:12:59.545 "dma_device_type": 1 00:12:59.545 }, 00:12:59.545 { 00:12:59.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.545 "dma_device_type": 2 00:12:59.545 } 00:12:59.545 ], 00:12:59.545 "driver_specific": {} 00:12:59.545 } 00:12:59.545 ] 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.545 BaseBdev3 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.545 [ 00:12:59.545 { 00:12:59.545 "name": "BaseBdev3", 00:12:59.545 "aliases": [ 00:12:59.545 "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b" 00:12:59.545 ], 00:12:59.545 "product_name": "Malloc disk", 00:12:59.545 "block_size": 512, 00:12:59.545 "num_blocks": 65536, 00:12:59.545 "uuid": "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:12:59.545 "assigned_rate_limits": { 00:12:59.545 "rw_ios_per_sec": 0, 00:12:59.545 "rw_mbytes_per_sec": 0, 00:12:59.545 "r_mbytes_per_sec": 0, 00:12:59.545 "w_mbytes_per_sec": 0 00:12:59.545 }, 00:12:59.545 "claimed": false, 00:12:59.545 "zoned": false, 00:12:59.545 "supported_io_types": { 00:12:59.545 "read": true, 00:12:59.545 "write": true, 00:12:59.545 "unmap": true, 00:12:59.545 "flush": true, 00:12:59.545 "reset": true, 00:12:59.545 "nvme_admin": false, 00:12:59.545 "nvme_io": false, 00:12:59.545 "nvme_io_md": false, 00:12:59.545 "write_zeroes": true, 00:12:59.545 "zcopy": true, 00:12:59.545 "get_zone_info": false, 00:12:59.545 "zone_management": false, 00:12:59.545 "zone_append": false, 00:12:59.545 "compare": false, 00:12:59.545 "compare_and_write": false, 00:12:59.545 "abort": true, 00:12:59.545 "seek_hole": false, 00:12:59.545 "seek_data": false, 
00:12:59.545 "copy": true, 00:12:59.545 "nvme_iov_md": false 00:12:59.545 }, 00:12:59.545 "memory_domains": [ 00:12:59.545 { 00:12:59.545 "dma_device_id": "system", 00:12:59.545 "dma_device_type": 1 00:12:59.545 }, 00:12:59.545 { 00:12:59.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.545 "dma_device_type": 2 00:12:59.545 } 00:12:59.545 ], 00:12:59.545 "driver_specific": {} 00:12:59.545 } 00:12:59.545 ] 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.545 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.805 BaseBdev4 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:59.805 
04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.805 [ 00:12:59.805 { 00:12:59.805 "name": "BaseBdev4", 00:12:59.805 "aliases": [ 00:12:59.805 "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e" 00:12:59.805 ], 00:12:59.805 "product_name": "Malloc disk", 00:12:59.805 "block_size": 512, 00:12:59.805 "num_blocks": 65536, 00:12:59.805 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:12:59.805 "assigned_rate_limits": { 00:12:59.805 "rw_ios_per_sec": 0, 00:12:59.805 "rw_mbytes_per_sec": 0, 00:12:59.805 "r_mbytes_per_sec": 0, 00:12:59.805 "w_mbytes_per_sec": 0 00:12:59.805 }, 00:12:59.805 "claimed": false, 00:12:59.805 "zoned": false, 00:12:59.805 "supported_io_types": { 00:12:59.805 "read": true, 00:12:59.805 "write": true, 00:12:59.805 "unmap": true, 00:12:59.805 "flush": true, 00:12:59.805 "reset": true, 00:12:59.805 "nvme_admin": false, 00:12:59.805 "nvme_io": false, 00:12:59.805 "nvme_io_md": false, 00:12:59.805 "write_zeroes": true, 00:12:59.805 "zcopy": true, 00:12:59.805 "get_zone_info": false, 00:12:59.805 "zone_management": false, 00:12:59.805 "zone_append": false, 00:12:59.805 "compare": false, 00:12:59.805 "compare_and_write": false, 00:12:59.805 "abort": true, 00:12:59.805 "seek_hole": false, 00:12:59.805 "seek_data": false, 00:12:59.805 
"copy": true, 00:12:59.805 "nvme_iov_md": false 00:12:59.805 }, 00:12:59.805 "memory_domains": [ 00:12:59.805 { 00:12:59.805 "dma_device_id": "system", 00:12:59.805 "dma_device_type": 1 00:12:59.805 }, 00:12:59.805 { 00:12:59.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.805 "dma_device_type": 2 00:12:59.805 } 00:12:59.805 ], 00:12:59.805 "driver_specific": {} 00:12:59.805 } 00:12:59.805 ] 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:59.805 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.806 [2024-11-27 04:29:56.190259] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.806 [2024-11-27 04:29:56.190341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.806 [2024-11-27 04:29:56.190375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.806 [2024-11-27 04:29:56.192764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.806 [2024-11-27 04:29:56.192933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.806 04:29:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.806 "name": "Existed_Raid", 00:12:59.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.806 "strip_size_kb": 64, 00:12:59.806 "state": "configuring", 00:12:59.806 
"raid_level": "concat", 00:12:59.806 "superblock": false, 00:12:59.806 "num_base_bdevs": 4, 00:12:59.806 "num_base_bdevs_discovered": 3, 00:12:59.806 "num_base_bdevs_operational": 4, 00:12:59.806 "base_bdevs_list": [ 00:12:59.806 { 00:12:59.806 "name": "BaseBdev1", 00:12:59.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.806 "is_configured": false, 00:12:59.806 "data_offset": 0, 00:12:59.806 "data_size": 0 00:12:59.806 }, 00:12:59.806 { 00:12:59.806 "name": "BaseBdev2", 00:12:59.806 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:12:59.806 "is_configured": true, 00:12:59.806 "data_offset": 0, 00:12:59.806 "data_size": 65536 00:12:59.806 }, 00:12:59.806 { 00:12:59.806 "name": "BaseBdev3", 00:12:59.806 "uuid": "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:12:59.806 "is_configured": true, 00:12:59.806 "data_offset": 0, 00:12:59.806 "data_size": 65536 00:12:59.806 }, 00:12:59.806 { 00:12:59.806 "name": "BaseBdev4", 00:12:59.806 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:12:59.806 "is_configured": true, 00:12:59.806 "data_offset": 0, 00:12:59.806 "data_size": 65536 00:12:59.806 } 00:12:59.806 ] 00:12:59.806 }' 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.806 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.065 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:00.065 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.065 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.065 [2024-11-27 04:29:56.629521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:00.065 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.065 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:00.065 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.065 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.065 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.066 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.325 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.325 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.325 "name": "Existed_Raid", 00:13:00.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.325 "strip_size_kb": 64, 00:13:00.325 "state": "configuring", 00:13:00.325 "raid_level": "concat", 00:13:00.325 "superblock": false, 
00:13:00.325 "num_base_bdevs": 4, 00:13:00.325 "num_base_bdevs_discovered": 2, 00:13:00.325 "num_base_bdevs_operational": 4, 00:13:00.325 "base_bdevs_list": [ 00:13:00.325 { 00:13:00.325 "name": "BaseBdev1", 00:13:00.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.325 "is_configured": false, 00:13:00.325 "data_offset": 0, 00:13:00.325 "data_size": 0 00:13:00.325 }, 00:13:00.325 { 00:13:00.325 "name": null, 00:13:00.325 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:13:00.325 "is_configured": false, 00:13:00.325 "data_offset": 0, 00:13:00.325 "data_size": 65536 00:13:00.325 }, 00:13:00.325 { 00:13:00.325 "name": "BaseBdev3", 00:13:00.325 "uuid": "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:13:00.325 "is_configured": true, 00:13:00.325 "data_offset": 0, 00:13:00.325 "data_size": 65536 00:13:00.325 }, 00:13:00.325 { 00:13:00.325 "name": "BaseBdev4", 00:13:00.325 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:13:00.325 "is_configured": true, 00:13:00.325 "data_offset": 0, 00:13:00.325 "data_size": 65536 00:13:00.325 } 00:13:00.325 ] 00:13:00.325 }' 00:13:00.325 04:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.325 04:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.584 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.584 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.584 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.584 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:00.584 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.584 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:00.584 04:29:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:00.584 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.584 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.844 [2024-11-27 04:29:57.202170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.844 BaseBdev1 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.844 [ 00:13:00.844 { 00:13:00.844 "name": "BaseBdev1", 00:13:00.844 "aliases": [ 00:13:00.844 "4cec32cd-c418-4ad1-876a-ef1114ed8380" 00:13:00.844 ], 00:13:00.844 "product_name": "Malloc disk", 00:13:00.844 "block_size": 512, 00:13:00.844 "num_blocks": 65536, 00:13:00.844 "uuid": "4cec32cd-c418-4ad1-876a-ef1114ed8380", 00:13:00.844 "assigned_rate_limits": { 00:13:00.844 "rw_ios_per_sec": 0, 00:13:00.844 "rw_mbytes_per_sec": 0, 00:13:00.844 "r_mbytes_per_sec": 0, 00:13:00.844 "w_mbytes_per_sec": 0 00:13:00.844 }, 00:13:00.844 "claimed": true, 00:13:00.844 "claim_type": "exclusive_write", 00:13:00.844 "zoned": false, 00:13:00.844 "supported_io_types": { 00:13:00.844 "read": true, 00:13:00.844 "write": true, 00:13:00.844 "unmap": true, 00:13:00.844 "flush": true, 00:13:00.844 "reset": true, 00:13:00.844 "nvme_admin": false, 00:13:00.844 "nvme_io": false, 00:13:00.844 "nvme_io_md": false, 00:13:00.844 "write_zeroes": true, 00:13:00.844 "zcopy": true, 00:13:00.844 "get_zone_info": false, 00:13:00.844 "zone_management": false, 00:13:00.844 "zone_append": false, 00:13:00.844 "compare": false, 00:13:00.844 "compare_and_write": false, 00:13:00.844 "abort": true, 00:13:00.844 "seek_hole": false, 00:13:00.844 "seek_data": false, 00:13:00.844 "copy": true, 00:13:00.844 "nvme_iov_md": false 00:13:00.844 }, 00:13:00.844 "memory_domains": [ 00:13:00.844 { 00:13:00.844 "dma_device_id": "system", 00:13:00.844 "dma_device_type": 1 00:13:00.844 }, 00:13:00.844 { 00:13:00.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.844 "dma_device_type": 2 00:13:00.844 } 00:13:00.844 ], 00:13:00.844 "driver_specific": {} 00:13:00.844 } 00:13:00.844 ] 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.844 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.844 "name": "Existed_Raid", 00:13:00.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.844 "strip_size_kb": 64, 00:13:00.844 "state": "configuring", 00:13:00.844 "raid_level": "concat", 00:13:00.844 "superblock": false, 
00:13:00.844 "num_base_bdevs": 4, 00:13:00.844 "num_base_bdevs_discovered": 3, 00:13:00.844 "num_base_bdevs_operational": 4, 00:13:00.844 "base_bdevs_list": [ 00:13:00.844 { 00:13:00.844 "name": "BaseBdev1", 00:13:00.844 "uuid": "4cec32cd-c418-4ad1-876a-ef1114ed8380", 00:13:00.844 "is_configured": true, 00:13:00.844 "data_offset": 0, 00:13:00.844 "data_size": 65536 00:13:00.844 }, 00:13:00.844 { 00:13:00.845 "name": null, 00:13:00.845 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:13:00.845 "is_configured": false, 00:13:00.845 "data_offset": 0, 00:13:00.845 "data_size": 65536 00:13:00.845 }, 00:13:00.845 { 00:13:00.845 "name": "BaseBdev3", 00:13:00.845 "uuid": "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:13:00.845 "is_configured": true, 00:13:00.845 "data_offset": 0, 00:13:00.845 "data_size": 65536 00:13:00.845 }, 00:13:00.845 { 00:13:00.845 "name": "BaseBdev4", 00:13:00.845 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:13:00.845 "is_configured": true, 00:13:00.845 "data_offset": 0, 00:13:00.845 "data_size": 65536 00:13:00.845 } 00:13:00.845 ] 00:13:00.845 }' 00:13:00.845 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.845 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.105 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:01.105 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.105 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.105 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:01.364 04:29:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.364 [2024-11-27 04:29:57.709427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.364 04:29:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.364 "name": "Existed_Raid", 00:13:01.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.364 "strip_size_kb": 64, 00:13:01.364 "state": "configuring", 00:13:01.364 "raid_level": "concat", 00:13:01.364 "superblock": false, 00:13:01.364 "num_base_bdevs": 4, 00:13:01.364 "num_base_bdevs_discovered": 2, 00:13:01.364 "num_base_bdevs_operational": 4, 00:13:01.364 "base_bdevs_list": [ 00:13:01.364 { 00:13:01.364 "name": "BaseBdev1", 00:13:01.364 "uuid": "4cec32cd-c418-4ad1-876a-ef1114ed8380", 00:13:01.364 "is_configured": true, 00:13:01.364 "data_offset": 0, 00:13:01.364 "data_size": 65536 00:13:01.364 }, 00:13:01.364 { 00:13:01.364 "name": null, 00:13:01.364 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:13:01.364 "is_configured": false, 00:13:01.364 "data_offset": 0, 00:13:01.364 "data_size": 65536 00:13:01.364 }, 00:13:01.364 { 00:13:01.364 "name": null, 00:13:01.364 "uuid": "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:13:01.364 "is_configured": false, 00:13:01.364 "data_offset": 0, 00:13:01.364 "data_size": 65536 00:13:01.364 }, 00:13:01.364 { 00:13:01.364 "name": "BaseBdev4", 00:13:01.364 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:13:01.364 "is_configured": true, 00:13:01.364 "data_offset": 0, 00:13:01.364 "data_size": 65536 00:13:01.364 } 00:13:01.364 ] 00:13:01.364 }' 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.364 04:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.624 [2024-11-27 04:29:58.136716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.624 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.624 "name": "Existed_Raid", 00:13:01.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.625 "strip_size_kb": 64, 00:13:01.625 "state": "configuring", 00:13:01.625 "raid_level": "concat", 00:13:01.625 "superblock": false, 00:13:01.625 "num_base_bdevs": 4, 00:13:01.625 "num_base_bdevs_discovered": 3, 00:13:01.625 "num_base_bdevs_operational": 4, 00:13:01.625 "base_bdevs_list": [ 00:13:01.625 { 00:13:01.625 "name": "BaseBdev1", 00:13:01.625 "uuid": "4cec32cd-c418-4ad1-876a-ef1114ed8380", 00:13:01.625 "is_configured": true, 00:13:01.625 "data_offset": 0, 00:13:01.625 "data_size": 65536 00:13:01.625 }, 00:13:01.625 { 00:13:01.625 "name": null, 00:13:01.625 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:13:01.625 "is_configured": false, 00:13:01.625 "data_offset": 0, 00:13:01.625 "data_size": 65536 00:13:01.625 }, 00:13:01.625 { 00:13:01.625 "name": "BaseBdev3", 00:13:01.625 "uuid": 
"c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:13:01.625 "is_configured": true, 00:13:01.625 "data_offset": 0, 00:13:01.625 "data_size": 65536 00:13:01.625 }, 00:13:01.625 { 00:13:01.625 "name": "BaseBdev4", 00:13:01.625 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:13:01.625 "is_configured": true, 00:13:01.625 "data_offset": 0, 00:13:01.625 "data_size": 65536 00:13:01.625 } 00:13:01.625 ] 00:13:01.625 }' 00:13:01.625 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.625 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.195 [2024-11-27 04:29:58.663877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.195 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.456 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.456 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.456 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.456 "name": "Existed_Raid", 00:13:02.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.456 "strip_size_kb": 64, 00:13:02.456 "state": "configuring", 00:13:02.456 "raid_level": "concat", 00:13:02.456 "superblock": false, 00:13:02.456 "num_base_bdevs": 4, 00:13:02.456 
"num_base_bdevs_discovered": 2, 00:13:02.456 "num_base_bdevs_operational": 4, 00:13:02.456 "base_bdevs_list": [ 00:13:02.456 { 00:13:02.456 "name": null, 00:13:02.456 "uuid": "4cec32cd-c418-4ad1-876a-ef1114ed8380", 00:13:02.456 "is_configured": false, 00:13:02.456 "data_offset": 0, 00:13:02.456 "data_size": 65536 00:13:02.456 }, 00:13:02.456 { 00:13:02.456 "name": null, 00:13:02.456 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:13:02.456 "is_configured": false, 00:13:02.456 "data_offset": 0, 00:13:02.456 "data_size": 65536 00:13:02.456 }, 00:13:02.456 { 00:13:02.456 "name": "BaseBdev3", 00:13:02.456 "uuid": "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:13:02.456 "is_configured": true, 00:13:02.456 "data_offset": 0, 00:13:02.456 "data_size": 65536 00:13:02.456 }, 00:13:02.456 { 00:13:02.456 "name": "BaseBdev4", 00:13:02.456 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:13:02.456 "is_configured": true, 00:13:02.456 "data_offset": 0, 00:13:02.456 "data_size": 65536 00:13:02.456 } 00:13:02.456 ] 00:13:02.456 }' 00:13:02.456 04:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.456 04:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.717 [2024-11-27 04:29:59.228512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.717 "name": "Existed_Raid", 00:13:02.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.717 "strip_size_kb": 64, 00:13:02.717 "state": "configuring", 00:13:02.717 "raid_level": "concat", 00:13:02.717 "superblock": false, 00:13:02.717 "num_base_bdevs": 4, 00:13:02.717 "num_base_bdevs_discovered": 3, 00:13:02.717 "num_base_bdevs_operational": 4, 00:13:02.717 "base_bdevs_list": [ 00:13:02.717 { 00:13:02.717 "name": null, 00:13:02.717 "uuid": "4cec32cd-c418-4ad1-876a-ef1114ed8380", 00:13:02.717 "is_configured": false, 00:13:02.717 "data_offset": 0, 00:13:02.717 "data_size": 65536 00:13:02.717 }, 00:13:02.717 { 00:13:02.717 "name": "BaseBdev2", 00:13:02.717 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:13:02.717 "is_configured": true, 00:13:02.717 "data_offset": 0, 00:13:02.717 "data_size": 65536 00:13:02.717 }, 00:13:02.717 { 00:13:02.717 "name": "BaseBdev3", 00:13:02.717 "uuid": "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:13:02.717 "is_configured": true, 00:13:02.717 "data_offset": 0, 00:13:02.717 "data_size": 65536 00:13:02.717 }, 00:13:02.717 { 00:13:02.717 "name": "BaseBdev4", 00:13:02.717 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:13:02.717 "is_configured": true, 00:13:02.717 "data_offset": 0, 00:13:02.717 "data_size": 65536 00:13:02.717 } 00:13:02.717 ] 00:13:02.717 }' 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.717 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4cec32cd-c418-4ad1-876a-ef1114ed8380 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.287 [2024-11-27 04:29:59.828296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:03.287 [2024-11-27 04:29:59.828487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:03.287 [2024-11-27 04:29:59.828513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:03.287 [2024-11-27 04:29:59.828866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:03.287 [2024-11-27 04:29:59.829113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:03.287 [2024-11-27 04:29:59.829161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:03.287 [2024-11-27 04:29:59.829505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.287 NewBaseBdev 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.287 04:29:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:03.287 [ 00:13:03.287 { 00:13:03.287 "name": "NewBaseBdev", 00:13:03.287 "aliases": [ 00:13:03.287 "4cec32cd-c418-4ad1-876a-ef1114ed8380" 00:13:03.287 ], 00:13:03.287 "product_name": "Malloc disk", 00:13:03.287 "block_size": 512, 00:13:03.287 "num_blocks": 65536, 00:13:03.287 "uuid": "4cec32cd-c418-4ad1-876a-ef1114ed8380", 00:13:03.287 "assigned_rate_limits": { 00:13:03.287 "rw_ios_per_sec": 0, 00:13:03.287 "rw_mbytes_per_sec": 0, 00:13:03.287 "r_mbytes_per_sec": 0, 00:13:03.287 "w_mbytes_per_sec": 0 00:13:03.287 }, 00:13:03.287 "claimed": true, 00:13:03.287 "claim_type": "exclusive_write", 00:13:03.287 "zoned": false, 00:13:03.287 "supported_io_types": { 00:13:03.287 "read": true, 00:13:03.287 "write": true, 00:13:03.287 "unmap": true, 00:13:03.287 "flush": true, 00:13:03.287 "reset": true, 00:13:03.287 "nvme_admin": false, 00:13:03.287 "nvme_io": false, 00:13:03.287 "nvme_io_md": false, 00:13:03.287 "write_zeroes": true, 00:13:03.287 "zcopy": true, 00:13:03.287 "get_zone_info": false, 00:13:03.287 "zone_management": false, 00:13:03.287 "zone_append": false, 00:13:03.287 "compare": false, 00:13:03.287 "compare_and_write": false, 00:13:03.287 "abort": true, 00:13:03.287 "seek_hole": false, 00:13:03.287 "seek_data": false, 00:13:03.287 "copy": true, 00:13:03.287 "nvme_iov_md": false 00:13:03.287 }, 00:13:03.287 "memory_domains": [ 00:13:03.287 { 00:13:03.287 "dma_device_id": "system", 00:13:03.287 "dma_device_type": 1 00:13:03.287 }, 00:13:03.287 { 00:13:03.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.288 "dma_device_type": 2 00:13:03.288 } 00:13:03.288 ], 00:13:03.288 "driver_specific": {} 00:13:03.288 } 00:13:03.288 ] 00:13:03.288 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.288 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:03.288 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:03.288 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.288 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.548 "name": "Existed_Raid", 00:13:03.548 "uuid": "5388fe1e-c03f-4f99-81f2-cc3beede4d4d", 00:13:03.548 "strip_size_kb": 64, 00:13:03.548 "state": "online", 00:13:03.548 "raid_level": "concat", 00:13:03.548 "superblock": false, 00:13:03.548 
"num_base_bdevs": 4, 00:13:03.548 "num_base_bdevs_discovered": 4, 00:13:03.548 "num_base_bdevs_operational": 4, 00:13:03.548 "base_bdevs_list": [ 00:13:03.548 { 00:13:03.548 "name": "NewBaseBdev", 00:13:03.548 "uuid": "4cec32cd-c418-4ad1-876a-ef1114ed8380", 00:13:03.548 "is_configured": true, 00:13:03.548 "data_offset": 0, 00:13:03.548 "data_size": 65536 00:13:03.548 }, 00:13:03.548 { 00:13:03.548 "name": "BaseBdev2", 00:13:03.548 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:13:03.548 "is_configured": true, 00:13:03.548 "data_offset": 0, 00:13:03.548 "data_size": 65536 00:13:03.548 }, 00:13:03.548 { 00:13:03.548 "name": "BaseBdev3", 00:13:03.548 "uuid": "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:13:03.548 "is_configured": true, 00:13:03.548 "data_offset": 0, 00:13:03.548 "data_size": 65536 00:13:03.548 }, 00:13:03.548 { 00:13:03.548 "name": "BaseBdev4", 00:13:03.548 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:13:03.548 "is_configured": true, 00:13:03.548 "data_offset": 0, 00:13:03.548 "data_size": 65536 00:13:03.548 } 00:13:03.548 ] 00:13:03.548 }' 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.548 04:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:03.879 04:30:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:03.879 [2024-11-27 04:30:00.276141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.879 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:03.879 "name": "Existed_Raid", 00:13:03.879 "aliases": [ 00:13:03.879 "5388fe1e-c03f-4f99-81f2-cc3beede4d4d" 00:13:03.879 ], 00:13:03.879 "product_name": "Raid Volume", 00:13:03.879 "block_size": 512, 00:13:03.879 "num_blocks": 262144, 00:13:03.879 "uuid": "5388fe1e-c03f-4f99-81f2-cc3beede4d4d", 00:13:03.879 "assigned_rate_limits": { 00:13:03.879 "rw_ios_per_sec": 0, 00:13:03.879 "rw_mbytes_per_sec": 0, 00:13:03.879 "r_mbytes_per_sec": 0, 00:13:03.879 "w_mbytes_per_sec": 0 00:13:03.879 }, 00:13:03.879 "claimed": false, 00:13:03.879 "zoned": false, 00:13:03.879 "supported_io_types": { 00:13:03.879 "read": true, 00:13:03.879 "write": true, 00:13:03.879 "unmap": true, 00:13:03.879 "flush": true, 00:13:03.879 "reset": true, 00:13:03.879 "nvme_admin": false, 00:13:03.879 "nvme_io": false, 00:13:03.879 "nvme_io_md": false, 00:13:03.879 "write_zeroes": true, 00:13:03.879 "zcopy": false, 00:13:03.879 "get_zone_info": false, 00:13:03.879 "zone_management": false, 00:13:03.879 "zone_append": false, 00:13:03.879 "compare": false, 00:13:03.879 "compare_and_write": false, 00:13:03.879 "abort": false, 00:13:03.879 "seek_hole": false, 00:13:03.879 "seek_data": false, 00:13:03.879 "copy": false, 00:13:03.879 "nvme_iov_md": false 00:13:03.879 }, 
00:13:03.879 "memory_domains": [ 00:13:03.879 { 00:13:03.879 "dma_device_id": "system", 00:13:03.879 "dma_device_type": 1 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.880 "dma_device_type": 2 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "dma_device_id": "system", 00:13:03.880 "dma_device_type": 1 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.880 "dma_device_type": 2 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "dma_device_id": "system", 00:13:03.880 "dma_device_type": 1 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.880 "dma_device_type": 2 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "dma_device_id": "system", 00:13:03.880 "dma_device_type": 1 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.880 "dma_device_type": 2 00:13:03.880 } 00:13:03.880 ], 00:13:03.880 "driver_specific": { 00:13:03.880 "raid": { 00:13:03.880 "uuid": "5388fe1e-c03f-4f99-81f2-cc3beede4d4d", 00:13:03.880 "strip_size_kb": 64, 00:13:03.880 "state": "online", 00:13:03.880 "raid_level": "concat", 00:13:03.880 "superblock": false, 00:13:03.880 "num_base_bdevs": 4, 00:13:03.880 "num_base_bdevs_discovered": 4, 00:13:03.880 "num_base_bdevs_operational": 4, 00:13:03.880 "base_bdevs_list": [ 00:13:03.880 { 00:13:03.880 "name": "NewBaseBdev", 00:13:03.880 "uuid": "4cec32cd-c418-4ad1-876a-ef1114ed8380", 00:13:03.880 "is_configured": true, 00:13:03.880 "data_offset": 0, 00:13:03.880 "data_size": 65536 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "name": "BaseBdev2", 00:13:03.880 "uuid": "7d3a72eb-fc37-41b3-8f8b-e13c262c16c6", 00:13:03.880 "is_configured": true, 00:13:03.880 "data_offset": 0, 00:13:03.880 "data_size": 65536 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "name": "BaseBdev3", 00:13:03.880 "uuid": "c4c7e60b-b9f1-4ba8-b94f-e92f3ff8aa0b", 00:13:03.880 "is_configured": true, 00:13:03.880 "data_offset": 0, 
00:13:03.880 "data_size": 65536 00:13:03.880 }, 00:13:03.880 { 00:13:03.880 "name": "BaseBdev4", 00:13:03.880 "uuid": "ca2f31fd-43e5-45f2-bc23-1b7173d95b1e", 00:13:03.880 "is_configured": true, 00:13:03.880 "data_offset": 0, 00:13:03.880 "data_size": 65536 00:13:03.880 } 00:13:03.880 ] 00:13:03.880 } 00:13:03.880 } 00:13:03.880 }' 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:03.880 BaseBdev2 00:13:03.880 BaseBdev3 00:13:03.880 BaseBdev4' 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.880 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.140 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.141 [2024-11-27 04:30:00.563285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:04.141 [2024-11-27 04:30:00.563350] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.141 [2024-11-27 04:30:00.563477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.141 [2024-11-27 04:30:00.563568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.141 [2024-11-27 04:30:00.563582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71566 00:13:04.141 04:30:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71566 ']' 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71566 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71566 00:13:04.141 killing process with pid 71566 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71566' 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71566 00:13:04.141 04:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71566 00:13:04.141 [2024-11-27 04:30:00.605615] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:04.711 [2024-11-27 04:30:01.104928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:06.089 00:13:06.089 real 0m11.966s 00:13:06.089 user 0m18.576s 00:13:06.089 sys 0m2.222s 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.089 ************************************ 00:13:06.089 END TEST raid_state_function_test 00:13:06.089 ************************************ 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.089 04:30:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:13:06.089 04:30:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:06.089 04:30:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.089 04:30:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.089 ************************************ 00:13:06.089 START TEST raid_state_function_test_sb 00:13:06.089 ************************************ 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72236 00:13:06.089 04:30:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72236' 00:13:06.089 Process raid pid: 72236 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72236 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72236 ']' 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.089 04:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.089 [2024-11-27 04:30:02.592546] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:13:06.089 [2024-11-27 04:30:02.592777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.349 [2024-11-27 04:30:02.751463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.349 [2024-11-27 04:30:02.896029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.607 [2024-11-27 04:30:03.149549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.607 [2024-11-27 04:30:03.149723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.866 [2024-11-27 04:30:03.431186] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.866 [2024-11-27 04:30:03.431258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:06.866 [2024-11-27 04:30:03.431268] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.866 [2024-11-27 04:30:03.431279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.866 [2024-11-27 04:30:03.431292] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:06.866 [2024-11-27 04:30:03.431301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:06.866 [2024-11-27 04:30:03.431307] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:06.866 [2024-11-27 04:30:03.431317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.866 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.867 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.867 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.867 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.867 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.867 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.867 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.867 
04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.867 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.125 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.125 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.125 "name": "Existed_Raid", 00:13:07.125 "uuid": "cb654c3f-6385-449e-b68e-c617913c3bf6", 00:13:07.125 "strip_size_kb": 64, 00:13:07.125 "state": "configuring", 00:13:07.125 "raid_level": "concat", 00:13:07.125 "superblock": true, 00:13:07.125 "num_base_bdevs": 4, 00:13:07.125 "num_base_bdevs_discovered": 0, 00:13:07.125 "num_base_bdevs_operational": 4, 00:13:07.125 "base_bdevs_list": [ 00:13:07.125 { 00:13:07.125 "name": "BaseBdev1", 00:13:07.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.125 "is_configured": false, 00:13:07.125 "data_offset": 0, 00:13:07.125 "data_size": 0 00:13:07.125 }, 00:13:07.125 { 00:13:07.125 "name": "BaseBdev2", 00:13:07.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.125 "is_configured": false, 00:13:07.125 "data_offset": 0, 00:13:07.125 "data_size": 0 00:13:07.125 }, 00:13:07.125 { 00:13:07.125 "name": "BaseBdev3", 00:13:07.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.125 "is_configured": false, 00:13:07.125 "data_offset": 0, 00:13:07.125 "data_size": 0 00:13:07.125 }, 00:13:07.125 { 00:13:07.125 "name": "BaseBdev4", 00:13:07.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.125 "is_configured": false, 00:13:07.125 "data_offset": 0, 00:13:07.125 "data_size": 0 00:13:07.125 } 00:13:07.125 ] 00:13:07.125 }' 00:13:07.125 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.125 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 04:30:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 [2024-11-27 04:30:03.858424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:07.384 [2024-11-27 04:30:03.858595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 [2024-11-27 04:30:03.870373] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:07.384 [2024-11-27 04:30:03.870485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:07.384 [2024-11-27 04:30:03.870526] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:07.384 [2024-11-27 04:30:03.870554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:07.384 [2024-11-27 04:30:03.870575] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:07.384 [2024-11-27 04:30:03.870599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:07.384 [2024-11-27 04:30:03.870620] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:07.384 [2024-11-27 04:30:03.870661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 [2024-11-27 04:30:03.924128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.384 BaseBdev1 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.384 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 [ 00:13:07.384 { 00:13:07.384 "name": "BaseBdev1", 00:13:07.384 "aliases": [ 00:13:07.384 "02f1a726-94b7-4166-be54-bf4c84c34834" 00:13:07.384 ], 00:13:07.384 "product_name": "Malloc disk", 00:13:07.384 "block_size": 512, 00:13:07.384 "num_blocks": 65536, 00:13:07.384 "uuid": "02f1a726-94b7-4166-be54-bf4c84c34834", 00:13:07.384 "assigned_rate_limits": { 00:13:07.384 "rw_ios_per_sec": 0, 00:13:07.384 "rw_mbytes_per_sec": 0, 00:13:07.384 "r_mbytes_per_sec": 0, 00:13:07.384 "w_mbytes_per_sec": 0 00:13:07.384 }, 00:13:07.384 "claimed": true, 00:13:07.384 "claim_type": "exclusive_write", 00:13:07.384 "zoned": false, 00:13:07.384 "supported_io_types": { 00:13:07.385 "read": true, 00:13:07.385 "write": true, 00:13:07.385 "unmap": true, 00:13:07.385 "flush": true, 00:13:07.385 "reset": true, 00:13:07.385 "nvme_admin": false, 00:13:07.385 "nvme_io": false, 00:13:07.385 "nvme_io_md": false, 00:13:07.385 "write_zeroes": true, 00:13:07.385 "zcopy": true, 00:13:07.385 "get_zone_info": false, 00:13:07.385 "zone_management": false, 00:13:07.385 "zone_append": false, 00:13:07.385 "compare": false, 00:13:07.385 "compare_and_write": false, 00:13:07.385 "abort": true, 00:13:07.385 "seek_hole": false, 00:13:07.385 "seek_data": false, 00:13:07.385 "copy": true, 00:13:07.385 "nvme_iov_md": false 00:13:07.385 }, 00:13:07.385 "memory_domains": [ 00:13:07.385 { 00:13:07.385 "dma_device_id": "system", 00:13:07.385 "dma_device_type": 1 00:13:07.385 }, 00:13:07.385 { 00:13:07.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.385 "dma_device_type": 2 00:13:07.385 } 
00:13:07.385 ], 00:13:07.385 "driver_specific": {} 00:13:07.385 } 00:13:07.385 ] 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.385 04:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.643 04:30:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.643 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.643 "name": "Existed_Raid", 00:13:07.643 "uuid": "f44c6bdc-f404-4528-876f-8d9fd6d6dc66", 00:13:07.643 "strip_size_kb": 64, 00:13:07.643 "state": "configuring", 00:13:07.643 "raid_level": "concat", 00:13:07.643 "superblock": true, 00:13:07.643 "num_base_bdevs": 4, 00:13:07.643 "num_base_bdevs_discovered": 1, 00:13:07.643 "num_base_bdevs_operational": 4, 00:13:07.643 "base_bdevs_list": [ 00:13:07.643 { 00:13:07.643 "name": "BaseBdev1", 00:13:07.643 "uuid": "02f1a726-94b7-4166-be54-bf4c84c34834", 00:13:07.643 "is_configured": true, 00:13:07.643 "data_offset": 2048, 00:13:07.643 "data_size": 63488 00:13:07.643 }, 00:13:07.643 { 00:13:07.643 "name": "BaseBdev2", 00:13:07.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.643 "is_configured": false, 00:13:07.643 "data_offset": 0, 00:13:07.643 "data_size": 0 00:13:07.643 }, 00:13:07.643 { 00:13:07.643 "name": "BaseBdev3", 00:13:07.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.643 "is_configured": false, 00:13:07.643 "data_offset": 0, 00:13:07.643 "data_size": 0 00:13:07.643 }, 00:13:07.643 { 00:13:07.643 "name": "BaseBdev4", 00:13:07.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.643 "is_configured": false, 00:13:07.643 "data_offset": 0, 00:13:07.643 "data_size": 0 00:13:07.643 } 00:13:07.643 ] 00:13:07.643 }' 00:13:07.643 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.643 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.901 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:07.901 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.901 04:30:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.901 [2024-11-27 04:30:04.391605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:07.901 [2024-11-27 04:30:04.391808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:07.901 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.901 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:07.901 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.901 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.901 [2024-11-27 04:30:04.403632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.901 [2024-11-27 04:30:04.405985] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:07.901 [2024-11-27 04:30:04.406039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:07.902 [2024-11-27 04:30:04.406052] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:07.902 [2024-11-27 04:30:04.406064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:07.902 [2024-11-27 04:30:04.406072] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:07.902 [2024-11-27 04:30:04.406094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:07.902 "name": "Existed_Raid", 00:13:07.902 "uuid": "bccb64af-415f-495c-8020-b4b48ef322f7", 00:13:07.902 "strip_size_kb": 64, 00:13:07.902 "state": "configuring", 00:13:07.902 "raid_level": "concat", 00:13:07.902 "superblock": true, 00:13:07.902 "num_base_bdevs": 4, 00:13:07.902 "num_base_bdevs_discovered": 1, 00:13:07.902 "num_base_bdevs_operational": 4, 00:13:07.902 "base_bdevs_list": [ 00:13:07.902 { 00:13:07.902 "name": "BaseBdev1", 00:13:07.902 "uuid": "02f1a726-94b7-4166-be54-bf4c84c34834", 00:13:07.902 "is_configured": true, 00:13:07.902 "data_offset": 2048, 00:13:07.902 "data_size": 63488 00:13:07.902 }, 00:13:07.902 { 00:13:07.902 "name": "BaseBdev2", 00:13:07.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.902 "is_configured": false, 00:13:07.902 "data_offset": 0, 00:13:07.902 "data_size": 0 00:13:07.902 }, 00:13:07.902 { 00:13:07.902 "name": "BaseBdev3", 00:13:07.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.902 "is_configured": false, 00:13:07.902 "data_offset": 0, 00:13:07.902 "data_size": 0 00:13:07.902 }, 00:13:07.902 { 00:13:07.902 "name": "BaseBdev4", 00:13:07.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.902 "is_configured": false, 00:13:07.902 "data_offset": 0, 00:13:07.902 "data_size": 0 00:13:07.902 } 00:13:07.902 ] 00:13:07.902 }' 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.902 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.512 [2024-11-27 04:30:04.922927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:08.512 BaseBdev2 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.512 [ 00:13:08.512 { 00:13:08.512 "name": "BaseBdev2", 00:13:08.512 "aliases": [ 00:13:08.512 "ad7ced62-0463-4e90-b751-d60ba709964d" 00:13:08.512 ], 00:13:08.512 "product_name": "Malloc disk", 00:13:08.512 "block_size": 512, 00:13:08.512 "num_blocks": 65536, 00:13:08.512 "uuid": "ad7ced62-0463-4e90-b751-d60ba709964d", 
00:13:08.512 "assigned_rate_limits": { 00:13:08.512 "rw_ios_per_sec": 0, 00:13:08.512 "rw_mbytes_per_sec": 0, 00:13:08.512 "r_mbytes_per_sec": 0, 00:13:08.512 "w_mbytes_per_sec": 0 00:13:08.512 }, 00:13:08.512 "claimed": true, 00:13:08.512 "claim_type": "exclusive_write", 00:13:08.512 "zoned": false, 00:13:08.512 "supported_io_types": { 00:13:08.512 "read": true, 00:13:08.512 "write": true, 00:13:08.512 "unmap": true, 00:13:08.512 "flush": true, 00:13:08.512 "reset": true, 00:13:08.512 "nvme_admin": false, 00:13:08.512 "nvme_io": false, 00:13:08.512 "nvme_io_md": false, 00:13:08.512 "write_zeroes": true, 00:13:08.512 "zcopy": true, 00:13:08.512 "get_zone_info": false, 00:13:08.512 "zone_management": false, 00:13:08.512 "zone_append": false, 00:13:08.512 "compare": false, 00:13:08.512 "compare_and_write": false, 00:13:08.512 "abort": true, 00:13:08.512 "seek_hole": false, 00:13:08.512 "seek_data": false, 00:13:08.512 "copy": true, 00:13:08.512 "nvme_iov_md": false 00:13:08.512 }, 00:13:08.512 "memory_domains": [ 00:13:08.512 { 00:13:08.512 "dma_device_id": "system", 00:13:08.512 "dma_device_type": 1 00:13:08.512 }, 00:13:08.512 { 00:13:08.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.512 "dma_device_type": 2 00:13:08.512 } 00:13:08.512 ], 00:13:08.512 "driver_specific": {} 00:13:08.512 } 00:13:08.512 ] 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.512 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.513 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.513 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.513 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.513 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.513 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.513 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.513 04:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.513 04:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.513 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.513 "name": "Existed_Raid", 00:13:08.513 "uuid": "bccb64af-415f-495c-8020-b4b48ef322f7", 00:13:08.513 "strip_size_kb": 64, 00:13:08.513 "state": "configuring", 00:13:08.513 "raid_level": "concat", 00:13:08.513 "superblock": true, 00:13:08.513 "num_base_bdevs": 4, 00:13:08.513 "num_base_bdevs_discovered": 2, 00:13:08.513 
"num_base_bdevs_operational": 4, 00:13:08.513 "base_bdevs_list": [ 00:13:08.513 { 00:13:08.513 "name": "BaseBdev1", 00:13:08.513 "uuid": "02f1a726-94b7-4166-be54-bf4c84c34834", 00:13:08.513 "is_configured": true, 00:13:08.513 "data_offset": 2048, 00:13:08.513 "data_size": 63488 00:13:08.513 }, 00:13:08.513 { 00:13:08.513 "name": "BaseBdev2", 00:13:08.513 "uuid": "ad7ced62-0463-4e90-b751-d60ba709964d", 00:13:08.513 "is_configured": true, 00:13:08.513 "data_offset": 2048, 00:13:08.513 "data_size": 63488 00:13:08.513 }, 00:13:08.513 { 00:13:08.513 "name": "BaseBdev3", 00:13:08.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.513 "is_configured": false, 00:13:08.513 "data_offset": 0, 00:13:08.513 "data_size": 0 00:13:08.513 }, 00:13:08.513 { 00:13:08.513 "name": "BaseBdev4", 00:13:08.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.513 "is_configured": false, 00:13:08.513 "data_offset": 0, 00:13:08.513 "data_size": 0 00:13:08.513 } 00:13:08.513 ] 00:13:08.513 }' 00:13:08.513 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.513 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.080 [2024-11-27 04:30:05.461196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.080 BaseBdev3 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.080 [ 00:13:09.080 { 00:13:09.080 "name": "BaseBdev3", 00:13:09.080 "aliases": [ 00:13:09.080 "4eea114a-4f4e-464f-abc3-797bba6ae1a3" 00:13:09.080 ], 00:13:09.080 "product_name": "Malloc disk", 00:13:09.080 "block_size": 512, 00:13:09.080 "num_blocks": 65536, 00:13:09.080 "uuid": "4eea114a-4f4e-464f-abc3-797bba6ae1a3", 00:13:09.080 "assigned_rate_limits": { 00:13:09.080 "rw_ios_per_sec": 0, 00:13:09.080 "rw_mbytes_per_sec": 0, 00:13:09.080 "r_mbytes_per_sec": 0, 00:13:09.080 "w_mbytes_per_sec": 0 00:13:09.080 }, 00:13:09.080 "claimed": true, 00:13:09.080 "claim_type": "exclusive_write", 00:13:09.080 "zoned": false, 00:13:09.080 "supported_io_types": { 
00:13:09.080 "read": true, 00:13:09.080 "write": true, 00:13:09.080 "unmap": true, 00:13:09.080 "flush": true, 00:13:09.080 "reset": true, 00:13:09.080 "nvme_admin": false, 00:13:09.080 "nvme_io": false, 00:13:09.080 "nvme_io_md": false, 00:13:09.080 "write_zeroes": true, 00:13:09.080 "zcopy": true, 00:13:09.080 "get_zone_info": false, 00:13:09.080 "zone_management": false, 00:13:09.080 "zone_append": false, 00:13:09.080 "compare": false, 00:13:09.080 "compare_and_write": false, 00:13:09.080 "abort": true, 00:13:09.080 "seek_hole": false, 00:13:09.080 "seek_data": false, 00:13:09.080 "copy": true, 00:13:09.080 "nvme_iov_md": false 00:13:09.080 }, 00:13:09.080 "memory_domains": [ 00:13:09.080 { 00:13:09.080 "dma_device_id": "system", 00:13:09.080 "dma_device_type": 1 00:13:09.080 }, 00:13:09.080 { 00:13:09.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.080 "dma_device_type": 2 00:13:09.080 } 00:13:09.080 ], 00:13:09.080 "driver_specific": {} 00:13:09.080 } 00:13:09.080 ] 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.080 "name": "Existed_Raid", 00:13:09.080 "uuid": "bccb64af-415f-495c-8020-b4b48ef322f7", 00:13:09.080 "strip_size_kb": 64, 00:13:09.080 "state": "configuring", 00:13:09.080 "raid_level": "concat", 00:13:09.080 "superblock": true, 00:13:09.080 "num_base_bdevs": 4, 00:13:09.080 "num_base_bdevs_discovered": 3, 00:13:09.080 "num_base_bdevs_operational": 4, 00:13:09.080 "base_bdevs_list": [ 00:13:09.080 { 00:13:09.080 "name": "BaseBdev1", 00:13:09.080 "uuid": "02f1a726-94b7-4166-be54-bf4c84c34834", 00:13:09.080 "is_configured": true, 00:13:09.080 "data_offset": 2048, 00:13:09.080 "data_size": 63488 00:13:09.080 }, 00:13:09.080 { 00:13:09.080 "name": "BaseBdev2", 00:13:09.080 
"uuid": "ad7ced62-0463-4e90-b751-d60ba709964d", 00:13:09.080 "is_configured": true, 00:13:09.080 "data_offset": 2048, 00:13:09.080 "data_size": 63488 00:13:09.080 }, 00:13:09.080 { 00:13:09.080 "name": "BaseBdev3", 00:13:09.080 "uuid": "4eea114a-4f4e-464f-abc3-797bba6ae1a3", 00:13:09.080 "is_configured": true, 00:13:09.080 "data_offset": 2048, 00:13:09.080 "data_size": 63488 00:13:09.080 }, 00:13:09.080 { 00:13:09.080 "name": "BaseBdev4", 00:13:09.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.080 "is_configured": false, 00:13:09.080 "data_offset": 0, 00:13:09.080 "data_size": 0 00:13:09.080 } 00:13:09.080 ] 00:13:09.080 }' 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.080 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.647 04:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:09.647 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.647 04:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.647 [2024-11-27 04:30:06.001787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:09.647 [2024-11-27 04:30:06.002124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:09.647 [2024-11-27 04:30:06.002143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:09.647 BaseBdev4 00:13:09.647 [2024-11-27 04:30:06.002460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:09.647 [2024-11-27 04:30:06.002632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:09.647 [2024-11-27 04:30:06.002646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:09.647 [2024-11-27 04:30:06.002791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.647 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.647 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:09.647 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:09.647 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:09.647 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:09.647 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:09.647 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.648 [ 00:13:09.648 { 00:13:09.648 "name": "BaseBdev4", 00:13:09.648 "aliases": [ 00:13:09.648 "28cbd8e1-d039-472c-973d-118fd90f5058" 00:13:09.648 ], 00:13:09.648 "product_name": "Malloc disk", 00:13:09.648 "block_size": 512, 00:13:09.648 
"num_blocks": 65536, 00:13:09.648 "uuid": "28cbd8e1-d039-472c-973d-118fd90f5058", 00:13:09.648 "assigned_rate_limits": { 00:13:09.648 "rw_ios_per_sec": 0, 00:13:09.648 "rw_mbytes_per_sec": 0, 00:13:09.648 "r_mbytes_per_sec": 0, 00:13:09.648 "w_mbytes_per_sec": 0 00:13:09.648 }, 00:13:09.648 "claimed": true, 00:13:09.648 "claim_type": "exclusive_write", 00:13:09.648 "zoned": false, 00:13:09.648 "supported_io_types": { 00:13:09.648 "read": true, 00:13:09.648 "write": true, 00:13:09.648 "unmap": true, 00:13:09.648 "flush": true, 00:13:09.648 "reset": true, 00:13:09.648 "nvme_admin": false, 00:13:09.648 "nvme_io": false, 00:13:09.648 "nvme_io_md": false, 00:13:09.648 "write_zeroes": true, 00:13:09.648 "zcopy": true, 00:13:09.648 "get_zone_info": false, 00:13:09.648 "zone_management": false, 00:13:09.648 "zone_append": false, 00:13:09.648 "compare": false, 00:13:09.648 "compare_and_write": false, 00:13:09.648 "abort": true, 00:13:09.648 "seek_hole": false, 00:13:09.648 "seek_data": false, 00:13:09.648 "copy": true, 00:13:09.648 "nvme_iov_md": false 00:13:09.648 }, 00:13:09.648 "memory_domains": [ 00:13:09.648 { 00:13:09.648 "dma_device_id": "system", 00:13:09.648 "dma_device_type": 1 00:13:09.648 }, 00:13:09.648 { 00:13:09.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.648 "dma_device_type": 2 00:13:09.648 } 00:13:09.648 ], 00:13:09.648 "driver_specific": {} 00:13:09.648 } 00:13:09.648 ] 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.648 "name": "Existed_Raid", 00:13:09.648 "uuid": "bccb64af-415f-495c-8020-b4b48ef322f7", 00:13:09.648 "strip_size_kb": 64, 00:13:09.648 "state": "online", 00:13:09.648 "raid_level": "concat", 00:13:09.648 "superblock": true, 00:13:09.648 "num_base_bdevs": 4, 
00:13:09.648 "num_base_bdevs_discovered": 4, 00:13:09.648 "num_base_bdevs_operational": 4, 00:13:09.648 "base_bdevs_list": [ 00:13:09.648 { 00:13:09.648 "name": "BaseBdev1", 00:13:09.648 "uuid": "02f1a726-94b7-4166-be54-bf4c84c34834", 00:13:09.648 "is_configured": true, 00:13:09.648 "data_offset": 2048, 00:13:09.648 "data_size": 63488 00:13:09.648 }, 00:13:09.648 { 00:13:09.648 "name": "BaseBdev2", 00:13:09.648 "uuid": "ad7ced62-0463-4e90-b751-d60ba709964d", 00:13:09.648 "is_configured": true, 00:13:09.648 "data_offset": 2048, 00:13:09.648 "data_size": 63488 00:13:09.648 }, 00:13:09.648 { 00:13:09.648 "name": "BaseBdev3", 00:13:09.648 "uuid": "4eea114a-4f4e-464f-abc3-797bba6ae1a3", 00:13:09.648 "is_configured": true, 00:13:09.648 "data_offset": 2048, 00:13:09.648 "data_size": 63488 00:13:09.648 }, 00:13:09.648 { 00:13:09.648 "name": "BaseBdev4", 00:13:09.648 "uuid": "28cbd8e1-d039-472c-973d-118fd90f5058", 00:13:09.648 "is_configured": true, 00:13:09.648 "data_offset": 2048, 00:13:09.648 "data_size": 63488 00:13:09.648 } 00:13:09.648 ] 00:13:09.648 }' 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.648 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.907 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:09.907 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:09.907 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:09.907 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:09.907 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:09.907 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:09.907 
04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:09.907 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:09.907 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.907 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.167 [2024-11-27 04:30:06.497399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:10.167 "name": "Existed_Raid", 00:13:10.167 "aliases": [ 00:13:10.167 "bccb64af-415f-495c-8020-b4b48ef322f7" 00:13:10.167 ], 00:13:10.167 "product_name": "Raid Volume", 00:13:10.167 "block_size": 512, 00:13:10.167 "num_blocks": 253952, 00:13:10.167 "uuid": "bccb64af-415f-495c-8020-b4b48ef322f7", 00:13:10.167 "assigned_rate_limits": { 00:13:10.167 "rw_ios_per_sec": 0, 00:13:10.167 "rw_mbytes_per_sec": 0, 00:13:10.167 "r_mbytes_per_sec": 0, 00:13:10.167 "w_mbytes_per_sec": 0 00:13:10.167 }, 00:13:10.167 "claimed": false, 00:13:10.167 "zoned": false, 00:13:10.167 "supported_io_types": { 00:13:10.167 "read": true, 00:13:10.167 "write": true, 00:13:10.167 "unmap": true, 00:13:10.167 "flush": true, 00:13:10.167 "reset": true, 00:13:10.167 "nvme_admin": false, 00:13:10.167 "nvme_io": false, 00:13:10.167 "nvme_io_md": false, 00:13:10.167 "write_zeroes": true, 00:13:10.167 "zcopy": false, 00:13:10.167 "get_zone_info": false, 00:13:10.167 "zone_management": false, 00:13:10.167 "zone_append": false, 00:13:10.167 "compare": false, 00:13:10.167 "compare_and_write": false, 00:13:10.167 "abort": false, 00:13:10.167 "seek_hole": false, 00:13:10.167 "seek_data": false, 00:13:10.167 "copy": false, 00:13:10.167 
"nvme_iov_md": false 00:13:10.167 }, 00:13:10.167 "memory_domains": [ 00:13:10.167 { 00:13:10.167 "dma_device_id": "system", 00:13:10.167 "dma_device_type": 1 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.167 "dma_device_type": 2 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "dma_device_id": "system", 00:13:10.167 "dma_device_type": 1 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.167 "dma_device_type": 2 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "dma_device_id": "system", 00:13:10.167 "dma_device_type": 1 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.167 "dma_device_type": 2 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "dma_device_id": "system", 00:13:10.167 "dma_device_type": 1 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.167 "dma_device_type": 2 00:13:10.167 } 00:13:10.167 ], 00:13:10.167 "driver_specific": { 00:13:10.167 "raid": { 00:13:10.167 "uuid": "bccb64af-415f-495c-8020-b4b48ef322f7", 00:13:10.167 "strip_size_kb": 64, 00:13:10.167 "state": "online", 00:13:10.167 "raid_level": "concat", 00:13:10.167 "superblock": true, 00:13:10.167 "num_base_bdevs": 4, 00:13:10.167 "num_base_bdevs_discovered": 4, 00:13:10.167 "num_base_bdevs_operational": 4, 00:13:10.167 "base_bdevs_list": [ 00:13:10.167 { 00:13:10.167 "name": "BaseBdev1", 00:13:10.167 "uuid": "02f1a726-94b7-4166-be54-bf4c84c34834", 00:13:10.167 "is_configured": true, 00:13:10.167 "data_offset": 2048, 00:13:10.167 "data_size": 63488 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "name": "BaseBdev2", 00:13:10.167 "uuid": "ad7ced62-0463-4e90-b751-d60ba709964d", 00:13:10.167 "is_configured": true, 00:13:10.167 "data_offset": 2048, 00:13:10.167 "data_size": 63488 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "name": "BaseBdev3", 00:13:10.167 "uuid": "4eea114a-4f4e-464f-abc3-797bba6ae1a3", 00:13:10.167 "is_configured": true, 
00:13:10.167 "data_offset": 2048, 00:13:10.167 "data_size": 63488 00:13:10.167 }, 00:13:10.167 { 00:13:10.167 "name": "BaseBdev4", 00:13:10.167 "uuid": "28cbd8e1-d039-472c-973d-118fd90f5058", 00:13:10.167 "is_configured": true, 00:13:10.167 "data_offset": 2048, 00:13:10.167 "data_size": 63488 00:13:10.167 } 00:13:10.167 ] 00:13:10.167 } 00:13:10.167 } 00:13:10.167 }' 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:10.167 BaseBdev2 00:13:10.167 BaseBdev3 00:13:10.167 BaseBdev4' 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.167 04:30:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:10.167 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.168 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.168 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.168 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.427 [2024-11-27 04:30:06.824611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:10.427 [2024-11-27 04:30:06.824766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.427 [2024-11-27 04:30:06.824847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.427 04:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:10.427 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.427 "name": "Existed_Raid", 00:13:10.427 "uuid": "bccb64af-415f-495c-8020-b4b48ef322f7", 00:13:10.427 "strip_size_kb": 64, 00:13:10.427 "state": "offline", 00:13:10.427 "raid_level": "concat", 00:13:10.427 "superblock": true, 00:13:10.427 "num_base_bdevs": 4, 00:13:10.427 "num_base_bdevs_discovered": 3, 00:13:10.427 "num_base_bdevs_operational": 3, 00:13:10.427 "base_bdevs_list": [ 00:13:10.427 { 00:13:10.427 "name": null, 00:13:10.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.427 "is_configured": false, 00:13:10.427 "data_offset": 0, 00:13:10.427 "data_size": 63488 00:13:10.427 }, 00:13:10.427 { 00:13:10.427 "name": "BaseBdev2", 00:13:10.427 "uuid": "ad7ced62-0463-4e90-b751-d60ba709964d", 00:13:10.427 "is_configured": true, 00:13:10.427 "data_offset": 2048, 00:13:10.427 "data_size": 63488 00:13:10.427 }, 00:13:10.427 { 00:13:10.427 "name": "BaseBdev3", 00:13:10.427 "uuid": "4eea114a-4f4e-464f-abc3-797bba6ae1a3", 00:13:10.427 "is_configured": true, 00:13:10.427 "data_offset": 2048, 00:13:10.427 "data_size": 63488 00:13:10.427 }, 00:13:10.427 { 00:13:10.427 "name": "BaseBdev4", 00:13:10.427 "uuid": "28cbd8e1-d039-472c-973d-118fd90f5058", 00:13:10.427 "is_configured": true, 00:13:10.427 "data_offset": 2048, 00:13:10.427 "data_size": 63488 00:13:10.427 } 00:13:10.427 ] 00:13:10.427 }' 00:13:10.427 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.427 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.997 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:10.997 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:10.997 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.997 
04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.998 [2024-11-27 04:30:07.442650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:10.998 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.258 [2024-11-27 04:30:07.621366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:11.258 04:30:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.258 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.258 [2024-11-27 04:30:07.794876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:11.258 [2024-11-27 04:30:07.794961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.518 04:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.518 BaseBdev2 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.518 [ 00:13:11.518 { 00:13:11.518 "name": "BaseBdev2", 00:13:11.518 "aliases": [ 00:13:11.518 
"35513949-0a98-4fac-ad84-b60049e891c1" 00:13:11.518 ], 00:13:11.518 "product_name": "Malloc disk", 00:13:11.518 "block_size": 512, 00:13:11.518 "num_blocks": 65536, 00:13:11.518 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:11.518 "assigned_rate_limits": { 00:13:11.518 "rw_ios_per_sec": 0, 00:13:11.518 "rw_mbytes_per_sec": 0, 00:13:11.518 "r_mbytes_per_sec": 0, 00:13:11.518 "w_mbytes_per_sec": 0 00:13:11.518 }, 00:13:11.518 "claimed": false, 00:13:11.518 "zoned": false, 00:13:11.518 "supported_io_types": { 00:13:11.518 "read": true, 00:13:11.518 "write": true, 00:13:11.518 "unmap": true, 00:13:11.518 "flush": true, 00:13:11.518 "reset": true, 00:13:11.518 "nvme_admin": false, 00:13:11.518 "nvme_io": false, 00:13:11.518 "nvme_io_md": false, 00:13:11.518 "write_zeroes": true, 00:13:11.518 "zcopy": true, 00:13:11.518 "get_zone_info": false, 00:13:11.518 "zone_management": false, 00:13:11.518 "zone_append": false, 00:13:11.518 "compare": false, 00:13:11.518 "compare_and_write": false, 00:13:11.518 "abort": true, 00:13:11.518 "seek_hole": false, 00:13:11.518 "seek_data": false, 00:13:11.518 "copy": true, 00:13:11.518 "nvme_iov_md": false 00:13:11.518 }, 00:13:11.518 "memory_domains": [ 00:13:11.518 { 00:13:11.518 "dma_device_id": "system", 00:13:11.518 "dma_device_type": 1 00:13:11.518 }, 00:13:11.518 { 00:13:11.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.518 "dma_device_type": 2 00:13:11.518 } 00:13:11.518 ], 00:13:11.518 "driver_specific": {} 00:13:11.518 } 00:13:11.518 ] 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:11.518 04:30:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.518 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.778 BaseBdev3 00:13:11.778 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.778 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:11.778 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:11.778 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:11.778 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.779 [ 00:13:11.779 { 
00:13:11.779 "name": "BaseBdev3", 00:13:11.779 "aliases": [ 00:13:11.779 "931f0d19-5577-42c2-8c30-4e6bfdc115eb" 00:13:11.779 ], 00:13:11.779 "product_name": "Malloc disk", 00:13:11.779 "block_size": 512, 00:13:11.779 "num_blocks": 65536, 00:13:11.779 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:11.779 "assigned_rate_limits": { 00:13:11.779 "rw_ios_per_sec": 0, 00:13:11.779 "rw_mbytes_per_sec": 0, 00:13:11.779 "r_mbytes_per_sec": 0, 00:13:11.779 "w_mbytes_per_sec": 0 00:13:11.779 }, 00:13:11.779 "claimed": false, 00:13:11.779 "zoned": false, 00:13:11.779 "supported_io_types": { 00:13:11.779 "read": true, 00:13:11.779 "write": true, 00:13:11.779 "unmap": true, 00:13:11.779 "flush": true, 00:13:11.779 "reset": true, 00:13:11.779 "nvme_admin": false, 00:13:11.779 "nvme_io": false, 00:13:11.779 "nvme_io_md": false, 00:13:11.779 "write_zeroes": true, 00:13:11.779 "zcopy": true, 00:13:11.779 "get_zone_info": false, 00:13:11.779 "zone_management": false, 00:13:11.779 "zone_append": false, 00:13:11.779 "compare": false, 00:13:11.779 "compare_and_write": false, 00:13:11.779 "abort": true, 00:13:11.779 "seek_hole": false, 00:13:11.779 "seek_data": false, 00:13:11.779 "copy": true, 00:13:11.779 "nvme_iov_md": false 00:13:11.779 }, 00:13:11.779 "memory_domains": [ 00:13:11.779 { 00:13:11.779 "dma_device_id": "system", 00:13:11.779 "dma_device_type": 1 00:13:11.779 }, 00:13:11.779 { 00:13:11.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.779 "dma_device_type": 2 00:13:11.779 } 00:13:11.779 ], 00:13:11.779 "driver_specific": {} 00:13:11.779 } 00:13:11.779 ] 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.779 BaseBdev4 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:11.779 [ 00:13:11.779 { 00:13:11.779 "name": "BaseBdev4", 00:13:11.779 "aliases": [ 00:13:11.779 "cb77c8c3-8ad8-4879-847c-099655a55fa3" 00:13:11.779 ], 00:13:11.779 "product_name": "Malloc disk", 00:13:11.779 "block_size": 512, 00:13:11.779 "num_blocks": 65536, 00:13:11.779 "uuid": "cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:11.779 "assigned_rate_limits": { 00:13:11.779 "rw_ios_per_sec": 0, 00:13:11.779 "rw_mbytes_per_sec": 0, 00:13:11.779 "r_mbytes_per_sec": 0, 00:13:11.779 "w_mbytes_per_sec": 0 00:13:11.779 }, 00:13:11.779 "claimed": false, 00:13:11.779 "zoned": false, 00:13:11.779 "supported_io_types": { 00:13:11.779 "read": true, 00:13:11.779 "write": true, 00:13:11.779 "unmap": true, 00:13:11.779 "flush": true, 00:13:11.779 "reset": true, 00:13:11.779 "nvme_admin": false, 00:13:11.779 "nvme_io": false, 00:13:11.779 "nvme_io_md": false, 00:13:11.779 "write_zeroes": true, 00:13:11.779 "zcopy": true, 00:13:11.779 "get_zone_info": false, 00:13:11.779 "zone_management": false, 00:13:11.779 "zone_append": false, 00:13:11.779 "compare": false, 00:13:11.779 "compare_and_write": false, 00:13:11.779 "abort": true, 00:13:11.779 "seek_hole": false, 00:13:11.779 "seek_data": false, 00:13:11.779 "copy": true, 00:13:11.779 "nvme_iov_md": false 00:13:11.779 }, 00:13:11.779 "memory_domains": [ 00:13:11.779 { 00:13:11.779 "dma_device_id": "system", 00:13:11.779 "dma_device_type": 1 00:13:11.779 }, 00:13:11.779 { 00:13:11.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.779 "dma_device_type": 2 00:13:11.779 } 00:13:11.779 ], 00:13:11.779 "driver_specific": {} 00:13:11.779 } 00:13:11.779 ] 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:11.779 04:30:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.779 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.779 [2024-11-27 04:30:08.228681] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:11.779 [2024-11-27 04:30:08.228842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:11.780 [2024-11-27 04:30:08.228895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.780 [2024-11-27 04:30:08.231210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.780 [2024-11-27 04:30:08.231310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.780 "name": "Existed_Raid", 00:13:11.780 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:11.780 "strip_size_kb": 64, 00:13:11.780 "state": "configuring", 00:13:11.780 "raid_level": "concat", 00:13:11.780 "superblock": true, 00:13:11.780 "num_base_bdevs": 4, 00:13:11.780 "num_base_bdevs_discovered": 3, 00:13:11.780 "num_base_bdevs_operational": 4, 00:13:11.780 "base_bdevs_list": [ 00:13:11.780 { 00:13:11.780 "name": "BaseBdev1", 00:13:11.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.780 "is_configured": false, 00:13:11.780 "data_offset": 0, 00:13:11.780 "data_size": 0 00:13:11.780 }, 00:13:11.780 { 00:13:11.780 "name": "BaseBdev2", 00:13:11.780 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:11.780 "is_configured": true, 00:13:11.780 "data_offset": 2048, 00:13:11.780 "data_size": 63488 
00:13:11.780 }, 00:13:11.780 { 00:13:11.780 "name": "BaseBdev3", 00:13:11.780 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:11.780 "is_configured": true, 00:13:11.780 "data_offset": 2048, 00:13:11.780 "data_size": 63488 00:13:11.780 }, 00:13:11.780 { 00:13:11.780 "name": "BaseBdev4", 00:13:11.780 "uuid": "cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:11.780 "is_configured": true, 00:13:11.780 "data_offset": 2048, 00:13:11.780 "data_size": 63488 00:13:11.780 } 00:13:11.780 ] 00:13:11.780 }' 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.780 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.350 [2024-11-27 04:30:08.731880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.350 "name": "Existed_Raid", 00:13:12.350 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:12.350 "strip_size_kb": 64, 00:13:12.350 "state": "configuring", 00:13:12.350 "raid_level": "concat", 00:13:12.350 "superblock": true, 00:13:12.350 "num_base_bdevs": 4, 00:13:12.350 "num_base_bdevs_discovered": 2, 00:13:12.350 "num_base_bdevs_operational": 4, 00:13:12.350 "base_bdevs_list": [ 00:13:12.350 { 00:13:12.350 "name": "BaseBdev1", 00:13:12.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.350 "is_configured": false, 00:13:12.350 "data_offset": 0, 00:13:12.350 "data_size": 0 00:13:12.350 }, 00:13:12.350 { 00:13:12.350 "name": null, 00:13:12.350 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:12.350 "is_configured": false, 00:13:12.350 "data_offset": 0, 00:13:12.350 "data_size": 63488 
00:13:12.350 }, 00:13:12.350 { 00:13:12.350 "name": "BaseBdev3", 00:13:12.350 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:12.350 "is_configured": true, 00:13:12.350 "data_offset": 2048, 00:13:12.350 "data_size": 63488 00:13:12.350 }, 00:13:12.350 { 00:13:12.350 "name": "BaseBdev4", 00:13:12.350 "uuid": "cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:12.350 "is_configured": true, 00:13:12.350 "data_offset": 2048, 00:13:12.350 "data_size": 63488 00:13:12.350 } 00:13:12.350 ] 00:13:12.350 }' 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.350 04:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.919 [2024-11-27 04:30:09.297427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.919 BaseBdev1 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.919 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.919 [ 00:13:12.919 { 00:13:12.919 "name": "BaseBdev1", 00:13:12.919 "aliases": [ 00:13:12.919 "d93b093c-57d1-42d1-8a73-aeaff494841f" 00:13:12.919 ], 00:13:12.919 "product_name": "Malloc disk", 00:13:12.919 "block_size": 512, 00:13:12.919 "num_blocks": 65536, 00:13:12.919 "uuid": "d93b093c-57d1-42d1-8a73-aeaff494841f", 00:13:12.919 "assigned_rate_limits": { 00:13:12.919 "rw_ios_per_sec": 0, 00:13:12.920 "rw_mbytes_per_sec": 0, 
00:13:12.920 "r_mbytes_per_sec": 0, 00:13:12.920 "w_mbytes_per_sec": 0 00:13:12.920 }, 00:13:12.920 "claimed": true, 00:13:12.920 "claim_type": "exclusive_write", 00:13:12.920 "zoned": false, 00:13:12.920 "supported_io_types": { 00:13:12.920 "read": true, 00:13:12.920 "write": true, 00:13:12.920 "unmap": true, 00:13:12.920 "flush": true, 00:13:12.920 "reset": true, 00:13:12.920 "nvme_admin": false, 00:13:12.920 "nvme_io": false, 00:13:12.920 "nvme_io_md": false, 00:13:12.920 "write_zeroes": true, 00:13:12.920 "zcopy": true, 00:13:12.920 "get_zone_info": false, 00:13:12.920 "zone_management": false, 00:13:12.920 "zone_append": false, 00:13:12.920 "compare": false, 00:13:12.920 "compare_and_write": false, 00:13:12.920 "abort": true, 00:13:12.920 "seek_hole": false, 00:13:12.920 "seek_data": false, 00:13:12.920 "copy": true, 00:13:12.920 "nvme_iov_md": false 00:13:12.920 }, 00:13:12.920 "memory_domains": [ 00:13:12.920 { 00:13:12.920 "dma_device_id": "system", 00:13:12.920 "dma_device_type": 1 00:13:12.920 }, 00:13:12.920 { 00:13:12.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.920 "dma_device_type": 2 00:13:12.920 } 00:13:12.920 ], 00:13:12.920 "driver_specific": {} 00:13:12.920 } 00:13:12.920 ] 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.920 04:30:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.920 "name": "Existed_Raid", 00:13:12.920 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:12.920 "strip_size_kb": 64, 00:13:12.920 "state": "configuring", 00:13:12.920 "raid_level": "concat", 00:13:12.920 "superblock": true, 00:13:12.920 "num_base_bdevs": 4, 00:13:12.920 "num_base_bdevs_discovered": 3, 00:13:12.920 "num_base_bdevs_operational": 4, 00:13:12.920 "base_bdevs_list": [ 00:13:12.920 { 00:13:12.920 "name": "BaseBdev1", 00:13:12.920 "uuid": "d93b093c-57d1-42d1-8a73-aeaff494841f", 00:13:12.920 "is_configured": true, 00:13:12.920 "data_offset": 2048, 00:13:12.920 "data_size": 63488 00:13:12.920 }, 00:13:12.920 { 
00:13:12.920 "name": null, 00:13:12.920 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:12.920 "is_configured": false, 00:13:12.920 "data_offset": 0, 00:13:12.920 "data_size": 63488 00:13:12.920 }, 00:13:12.920 { 00:13:12.920 "name": "BaseBdev3", 00:13:12.920 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:12.920 "is_configured": true, 00:13:12.920 "data_offset": 2048, 00:13:12.920 "data_size": 63488 00:13:12.920 }, 00:13:12.920 { 00:13:12.920 "name": "BaseBdev4", 00:13:12.920 "uuid": "cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:12.920 "is_configured": true, 00:13:12.920 "data_offset": 2048, 00:13:12.920 "data_size": 63488 00:13:12.920 } 00:13:12.920 ] 00:13:12.920 }' 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.920 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.488 [2024-11-27 04:30:09.840633] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.488 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.488 04:30:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.488 "name": "Existed_Raid", 00:13:13.488 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:13.489 "strip_size_kb": 64, 00:13:13.489 "state": "configuring", 00:13:13.489 "raid_level": "concat", 00:13:13.489 "superblock": true, 00:13:13.489 "num_base_bdevs": 4, 00:13:13.489 "num_base_bdevs_discovered": 2, 00:13:13.489 "num_base_bdevs_operational": 4, 00:13:13.489 "base_bdevs_list": [ 00:13:13.489 { 00:13:13.489 "name": "BaseBdev1", 00:13:13.489 "uuid": "d93b093c-57d1-42d1-8a73-aeaff494841f", 00:13:13.489 "is_configured": true, 00:13:13.489 "data_offset": 2048, 00:13:13.489 "data_size": 63488 00:13:13.489 }, 00:13:13.489 { 00:13:13.489 "name": null, 00:13:13.489 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:13.489 "is_configured": false, 00:13:13.489 "data_offset": 0, 00:13:13.489 "data_size": 63488 00:13:13.489 }, 00:13:13.489 { 00:13:13.489 "name": null, 00:13:13.489 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:13.489 "is_configured": false, 00:13:13.489 "data_offset": 0, 00:13:13.489 "data_size": 63488 00:13:13.489 }, 00:13:13.489 { 00:13:13.489 "name": "BaseBdev4", 00:13:13.489 "uuid": "cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:13.489 "is_configured": true, 00:13:13.489 "data_offset": 2048, 00:13:13.489 "data_size": 63488 00:13:13.489 } 00:13:13.489 ] 00:13:13.489 }' 00:13:13.489 04:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.489 04:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.748 
04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.748 [2024-11-27 04:30:10.295842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.748 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.008 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.008 "name": "Existed_Raid", 00:13:14.008 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:14.008 "strip_size_kb": 64, 00:13:14.008 "state": "configuring", 00:13:14.008 "raid_level": "concat", 00:13:14.008 "superblock": true, 00:13:14.008 "num_base_bdevs": 4, 00:13:14.008 "num_base_bdevs_discovered": 3, 00:13:14.008 "num_base_bdevs_operational": 4, 00:13:14.008 "base_bdevs_list": [ 00:13:14.008 { 00:13:14.008 "name": "BaseBdev1", 00:13:14.008 "uuid": "d93b093c-57d1-42d1-8a73-aeaff494841f", 00:13:14.008 "is_configured": true, 00:13:14.008 "data_offset": 2048, 00:13:14.008 "data_size": 63488 00:13:14.008 }, 00:13:14.008 { 00:13:14.008 "name": null, 00:13:14.008 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:14.008 "is_configured": false, 00:13:14.008 "data_offset": 0, 00:13:14.008 "data_size": 63488 00:13:14.008 }, 00:13:14.008 { 00:13:14.008 "name": "BaseBdev3", 00:13:14.008 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:14.008 "is_configured": true, 00:13:14.008 "data_offset": 2048, 00:13:14.008 "data_size": 63488 00:13:14.008 }, 00:13:14.008 { 00:13:14.008 "name": "BaseBdev4", 00:13:14.008 "uuid": 
"cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:14.008 "is_configured": true, 00:13:14.008 "data_offset": 2048, 00:13:14.008 "data_size": 63488 00:13:14.008 } 00:13:14.008 ] 00:13:14.008 }' 00:13:14.008 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.008 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.268 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:14.268 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.268 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.268 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.268 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.268 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:14.268 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:14.268 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.268 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.268 [2024-11-27 04:30:10.779222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.527 "name": "Existed_Raid", 00:13:14.527 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:14.527 "strip_size_kb": 64, 00:13:14.527 "state": "configuring", 00:13:14.527 "raid_level": "concat", 00:13:14.527 "superblock": true, 00:13:14.527 "num_base_bdevs": 4, 00:13:14.527 "num_base_bdevs_discovered": 2, 00:13:14.527 "num_base_bdevs_operational": 4, 00:13:14.527 "base_bdevs_list": [ 00:13:14.527 { 00:13:14.527 "name": null, 00:13:14.527 
"uuid": "d93b093c-57d1-42d1-8a73-aeaff494841f", 00:13:14.527 "is_configured": false, 00:13:14.527 "data_offset": 0, 00:13:14.527 "data_size": 63488 00:13:14.527 }, 00:13:14.527 { 00:13:14.527 "name": null, 00:13:14.527 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:14.527 "is_configured": false, 00:13:14.527 "data_offset": 0, 00:13:14.527 "data_size": 63488 00:13:14.527 }, 00:13:14.527 { 00:13:14.527 "name": "BaseBdev3", 00:13:14.527 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:14.527 "is_configured": true, 00:13:14.527 "data_offset": 2048, 00:13:14.527 "data_size": 63488 00:13:14.527 }, 00:13:14.527 { 00:13:14.527 "name": "BaseBdev4", 00:13:14.527 "uuid": "cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:14.527 "is_configured": true, 00:13:14.527 "data_offset": 2048, 00:13:14.527 "data_size": 63488 00:13:14.527 } 00:13:14.527 ] 00:13:14.527 }' 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.527 04:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.786 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.786 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.786 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.786 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:14.786 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.786 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:14.786 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:14.786 04:30:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.786 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.045 [2024-11-27 04:30:11.377134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.045 04:30:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.045 "name": "Existed_Raid", 00:13:15.045 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:15.045 "strip_size_kb": 64, 00:13:15.045 "state": "configuring", 00:13:15.045 "raid_level": "concat", 00:13:15.045 "superblock": true, 00:13:15.045 "num_base_bdevs": 4, 00:13:15.045 "num_base_bdevs_discovered": 3, 00:13:15.045 "num_base_bdevs_operational": 4, 00:13:15.045 "base_bdevs_list": [ 00:13:15.045 { 00:13:15.045 "name": null, 00:13:15.045 "uuid": "d93b093c-57d1-42d1-8a73-aeaff494841f", 00:13:15.045 "is_configured": false, 00:13:15.045 "data_offset": 0, 00:13:15.045 "data_size": 63488 00:13:15.045 }, 00:13:15.045 { 00:13:15.045 "name": "BaseBdev2", 00:13:15.045 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:15.045 "is_configured": true, 00:13:15.045 "data_offset": 2048, 00:13:15.045 "data_size": 63488 00:13:15.045 }, 00:13:15.045 { 00:13:15.045 "name": "BaseBdev3", 00:13:15.045 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:15.045 "is_configured": true, 00:13:15.045 "data_offset": 2048, 00:13:15.045 "data_size": 63488 00:13:15.045 }, 00:13:15.045 { 00:13:15.045 "name": "BaseBdev4", 00:13:15.045 "uuid": "cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:15.045 "is_configured": true, 00:13:15.045 "data_offset": 2048, 00:13:15.045 "data_size": 63488 00:13:15.045 } 00:13:15.045 ] 00:13:15.045 }' 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.045 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.305 04:30:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d93b093c-57d1-42d1-8a73-aeaff494841f 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.305 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.564 [2024-11-27 04:30:11.924131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:15.564 [2024-11-27 04:30:11.924517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:15.564 [2024-11-27 04:30:11.924568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:15.564 [2024-11-27 04:30:11.924901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:15.564 NewBaseBdev 00:13:15.564 [2024-11-27 04:30:11.925142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:15.564 [2024-11-27 04:30:11.925160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:15.564 [2024-11-27 04:30:11.925321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:15.564 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.564 04:30:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.564 [ 00:13:15.564 { 00:13:15.564 "name": "NewBaseBdev", 00:13:15.564 "aliases": [ 00:13:15.564 "d93b093c-57d1-42d1-8a73-aeaff494841f" 00:13:15.565 ], 00:13:15.565 "product_name": "Malloc disk", 00:13:15.565 "block_size": 512, 00:13:15.565 "num_blocks": 65536, 00:13:15.565 "uuid": "d93b093c-57d1-42d1-8a73-aeaff494841f", 00:13:15.565 "assigned_rate_limits": { 00:13:15.565 "rw_ios_per_sec": 0, 00:13:15.565 "rw_mbytes_per_sec": 0, 00:13:15.565 "r_mbytes_per_sec": 0, 00:13:15.565 "w_mbytes_per_sec": 0 00:13:15.565 }, 00:13:15.565 "claimed": true, 00:13:15.565 "claim_type": "exclusive_write", 00:13:15.565 "zoned": false, 00:13:15.565 "supported_io_types": { 00:13:15.565 "read": true, 00:13:15.565 "write": true, 00:13:15.565 "unmap": true, 00:13:15.565 "flush": true, 00:13:15.565 "reset": true, 00:13:15.565 "nvme_admin": false, 00:13:15.565 "nvme_io": false, 00:13:15.565 "nvme_io_md": false, 00:13:15.565 "write_zeroes": true, 00:13:15.565 "zcopy": true, 00:13:15.565 "get_zone_info": false, 00:13:15.565 "zone_management": false, 00:13:15.565 "zone_append": false, 00:13:15.565 "compare": false, 00:13:15.565 "compare_and_write": false, 00:13:15.565 "abort": true, 00:13:15.565 "seek_hole": false, 00:13:15.565 "seek_data": false, 00:13:15.565 "copy": true, 00:13:15.565 "nvme_iov_md": false 00:13:15.565 }, 00:13:15.565 "memory_domains": [ 00:13:15.565 { 00:13:15.565 "dma_device_id": "system", 00:13:15.565 "dma_device_type": 1 00:13:15.565 }, 00:13:15.565 { 00:13:15.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.565 "dma_device_type": 2 00:13:15.565 } 00:13:15.565 ], 00:13:15.565 "driver_specific": {} 00:13:15.565 } 00:13:15.565 ] 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:15.565 04:30:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.565 04:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.565 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.565 "name": "Existed_Raid", 00:13:15.565 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:15.565 "strip_size_kb": 64, 00:13:15.565 
"state": "online", 00:13:15.565 "raid_level": "concat", 00:13:15.565 "superblock": true, 00:13:15.565 "num_base_bdevs": 4, 00:13:15.565 "num_base_bdevs_discovered": 4, 00:13:15.565 "num_base_bdevs_operational": 4, 00:13:15.565 "base_bdevs_list": [ 00:13:15.565 { 00:13:15.565 "name": "NewBaseBdev", 00:13:15.565 "uuid": "d93b093c-57d1-42d1-8a73-aeaff494841f", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 2048, 00:13:15.565 "data_size": 63488 00:13:15.565 }, 00:13:15.565 { 00:13:15.565 "name": "BaseBdev2", 00:13:15.565 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 2048, 00:13:15.565 "data_size": 63488 00:13:15.565 }, 00:13:15.565 { 00:13:15.565 "name": "BaseBdev3", 00:13:15.565 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 2048, 00:13:15.565 "data_size": 63488 00:13:15.565 }, 00:13:15.565 { 00:13:15.565 "name": "BaseBdev4", 00:13:15.565 "uuid": "cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 2048, 00:13:15.565 "data_size": 63488 00:13:15.565 } 00:13:15.565 ] 00:13:15.565 }' 00:13:15.565 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.565 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:15.825 
04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.825 [2024-11-27 04:30:12.359983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:15.825 "name": "Existed_Raid", 00:13:15.825 "aliases": [ 00:13:15.825 "bb101ed7-a557-4a95-b97e-9d1a3de95f8f" 00:13:15.825 ], 00:13:15.825 "product_name": "Raid Volume", 00:13:15.825 "block_size": 512, 00:13:15.825 "num_blocks": 253952, 00:13:15.825 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:15.825 "assigned_rate_limits": { 00:13:15.825 "rw_ios_per_sec": 0, 00:13:15.825 "rw_mbytes_per_sec": 0, 00:13:15.825 "r_mbytes_per_sec": 0, 00:13:15.825 "w_mbytes_per_sec": 0 00:13:15.825 }, 00:13:15.825 "claimed": false, 00:13:15.825 "zoned": false, 00:13:15.825 "supported_io_types": { 00:13:15.825 "read": true, 00:13:15.825 "write": true, 00:13:15.825 "unmap": true, 00:13:15.825 "flush": true, 00:13:15.825 "reset": true, 00:13:15.825 "nvme_admin": false, 00:13:15.825 "nvme_io": false, 00:13:15.825 "nvme_io_md": false, 00:13:15.825 "write_zeroes": true, 00:13:15.825 "zcopy": false, 00:13:15.825 "get_zone_info": false, 00:13:15.825 "zone_management": false, 00:13:15.825 "zone_append": false, 00:13:15.825 "compare": false, 00:13:15.825 "compare_and_write": false, 00:13:15.825 "abort": 
false, 00:13:15.825 "seek_hole": false, 00:13:15.825 "seek_data": false, 00:13:15.825 "copy": false, 00:13:15.825 "nvme_iov_md": false 00:13:15.825 }, 00:13:15.825 "memory_domains": [ 00:13:15.825 { 00:13:15.825 "dma_device_id": "system", 00:13:15.825 "dma_device_type": 1 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.825 "dma_device_type": 2 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 "dma_device_id": "system", 00:13:15.825 "dma_device_type": 1 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.825 "dma_device_type": 2 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 "dma_device_id": "system", 00:13:15.825 "dma_device_type": 1 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.825 "dma_device_type": 2 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 "dma_device_id": "system", 00:13:15.825 "dma_device_type": 1 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.825 "dma_device_type": 2 00:13:15.825 } 00:13:15.825 ], 00:13:15.825 "driver_specific": { 00:13:15.825 "raid": { 00:13:15.825 "uuid": "bb101ed7-a557-4a95-b97e-9d1a3de95f8f", 00:13:15.825 "strip_size_kb": 64, 00:13:15.825 "state": "online", 00:13:15.825 "raid_level": "concat", 00:13:15.825 "superblock": true, 00:13:15.825 "num_base_bdevs": 4, 00:13:15.825 "num_base_bdevs_discovered": 4, 00:13:15.825 "num_base_bdevs_operational": 4, 00:13:15.825 "base_bdevs_list": [ 00:13:15.825 { 00:13:15.825 "name": "NewBaseBdev", 00:13:15.825 "uuid": "d93b093c-57d1-42d1-8a73-aeaff494841f", 00:13:15.825 "is_configured": true, 00:13:15.825 "data_offset": 2048, 00:13:15.825 "data_size": 63488 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 "name": "BaseBdev2", 00:13:15.825 "uuid": "35513949-0a98-4fac-ad84-b60049e891c1", 00:13:15.825 "is_configured": true, 00:13:15.825 "data_offset": 2048, 00:13:15.825 "data_size": 63488 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 
"name": "BaseBdev3", 00:13:15.825 "uuid": "931f0d19-5577-42c2-8c30-4e6bfdc115eb", 00:13:15.825 "is_configured": true, 00:13:15.825 "data_offset": 2048, 00:13:15.825 "data_size": 63488 00:13:15.825 }, 00:13:15.825 { 00:13:15.825 "name": "BaseBdev4", 00:13:15.825 "uuid": "cb77c8c3-8ad8-4879-847c-099655a55fa3", 00:13:15.825 "is_configured": true, 00:13:15.825 "data_offset": 2048, 00:13:15.825 "data_size": 63488 00:13:15.825 } 00:13:15.825 ] 00:13:15.825 } 00:13:15.825 } 00:13:15.825 }' 00:13:15.825 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:16.084 BaseBdev2 00:13:16.084 BaseBdev3 00:13:16.084 BaseBdev4' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.084 04:30:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.084 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.352 [2024-11-27 04:30:12.698960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.352 [2024-11-27 04:30:12.699013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.352 [2024-11-27 04:30:12.699113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.352 [2024-11-27 04:30:12.699194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.352 [2024-11-27 04:30:12.699206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72236 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72236 ']' 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72236 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72236 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72236' 00:13:16.352 killing process with pid 72236 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72236 00:13:16.352 [2024-11-27 04:30:12.747129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.352 04:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72236 00:13:16.934 [2024-11-27 04:30:13.219788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.313 04:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:18.313 00:13:18.313 real 0m12.078s 00:13:18.313 user 0m18.737s 00:13:18.313 sys 0m2.225s 00:13:18.313 04:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.313 04:30:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.313 ************************************ 00:13:18.313 END TEST raid_state_function_test_sb 00:13:18.313 ************************************ 00:13:18.313 04:30:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:18.313 04:30:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:18.313 04:30:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.313 04:30:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:18.313 ************************************ 00:13:18.313 START TEST raid_superblock_test 00:13:18.313 ************************************ 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72913 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72913 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72913 ']' 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.313 04:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.313 [2024-11-27 04:30:14.734599] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:13:18.313 [2024-11-27 04:30:14.734842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72913 ] 00:13:18.313 [2024-11-27 04:30:14.893913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.571 [2024-11-27 04:30:15.052213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.830 [2024-11-27 04:30:15.323760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.830 [2024-11-27 04:30:15.323959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.088 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.088 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:19.088 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:19.088 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:19.088 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:19.088 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:19.088 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:19.089 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:19.089 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:19.090 
04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.090 malloc1 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.090 [2024-11-27 04:30:15.656931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:19.090 [2024-11-27 04:30:15.657155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.090 [2024-11-27 04:30:15.657227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:19.090 [2024-11-27 04:30:15.657313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.090 [2024-11-27 04:30:15.660232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.090 [2024-11-27 04:30:15.660316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:19.090 pt1 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:19.090 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:19.091 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:19.091 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:19.091 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:19.091 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.091 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.350 malloc2 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.350 [2024-11-27 04:30:15.720878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:19.350 [2024-11-27 04:30:15.721039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.350 [2024-11-27 04:30:15.721110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:19.350 [2024-11-27 04:30:15.721146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.350 [2024-11-27 04:30:15.723782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.350 [2024-11-27 04:30:15.723866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:19.350 
pt2 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.350 malloc3 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.350 [2024-11-27 04:30:15.798156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:19.350 [2024-11-27 04:30:15.798219] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.350 [2024-11-27 04:30:15.798243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:19.350 [2024-11-27 04:30:15.798254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.350 [2024-11-27 04:30:15.800606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.350 [2024-11-27 04:30:15.800725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:19.350 pt3 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.350 malloc4 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.350 [2024-11-27 04:30:15.862730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:19.350 [2024-11-27 04:30:15.862876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.350 [2024-11-27 04:30:15.862919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:19.350 [2024-11-27 04:30:15.862957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.350 [2024-11-27 04:30:15.865465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.350 [2024-11-27 04:30:15.865533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:19.350 pt4 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.350 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.350 [2024-11-27 04:30:15.874759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:19.350 [2024-11-27 
04:30:15.876975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:19.350 [2024-11-27 04:30:15.877125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:19.351 [2024-11-27 04:30:15.877211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:19.351 [2024-11-27 04:30:15.877434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:19.351 [2024-11-27 04:30:15.877477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:19.351 [2024-11-27 04:30:15.877765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:19.351 [2024-11-27 04:30:15.877983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:19.351 [2024-11-27 04:30:15.878029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:19.351 [2024-11-27 04:30:15.878236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.351 "name": "raid_bdev1", 00:13:19.351 "uuid": "57382951-2ec6-40b5-87b0-d0e9c6ce1866", 00:13:19.351 "strip_size_kb": 64, 00:13:19.351 "state": "online", 00:13:19.351 "raid_level": "concat", 00:13:19.351 "superblock": true, 00:13:19.351 "num_base_bdevs": 4, 00:13:19.351 "num_base_bdevs_discovered": 4, 00:13:19.351 "num_base_bdevs_operational": 4, 00:13:19.351 "base_bdevs_list": [ 00:13:19.351 { 00:13:19.351 "name": "pt1", 00:13:19.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.351 "is_configured": true, 00:13:19.351 "data_offset": 2048, 00:13:19.351 "data_size": 63488 00:13:19.351 }, 00:13:19.351 { 00:13:19.351 "name": "pt2", 00:13:19.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.351 "is_configured": true, 00:13:19.351 "data_offset": 2048, 00:13:19.351 "data_size": 63488 00:13:19.351 }, 00:13:19.351 { 00:13:19.351 "name": "pt3", 00:13:19.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:19.351 "is_configured": true, 00:13:19.351 "data_offset": 2048, 00:13:19.351 
"data_size": 63488 00:13:19.351 }, 00:13:19.351 { 00:13:19.351 "name": "pt4", 00:13:19.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:19.351 "is_configured": true, 00:13:19.351 "data_offset": 2048, 00:13:19.351 "data_size": 63488 00:13:19.351 } 00:13:19.351 ] 00:13:19.351 }' 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.351 04:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.915 [2024-11-27 04:30:16.330422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.915 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.915 "name": "raid_bdev1", 00:13:19.915 "aliases": [ 00:13:19.915 "57382951-2ec6-40b5-87b0-d0e9c6ce1866" 
00:13:19.915 ], 00:13:19.915 "product_name": "Raid Volume", 00:13:19.915 "block_size": 512, 00:13:19.915 "num_blocks": 253952, 00:13:19.915 "uuid": "57382951-2ec6-40b5-87b0-d0e9c6ce1866", 00:13:19.915 "assigned_rate_limits": { 00:13:19.915 "rw_ios_per_sec": 0, 00:13:19.915 "rw_mbytes_per_sec": 0, 00:13:19.915 "r_mbytes_per_sec": 0, 00:13:19.915 "w_mbytes_per_sec": 0 00:13:19.915 }, 00:13:19.915 "claimed": false, 00:13:19.915 "zoned": false, 00:13:19.915 "supported_io_types": { 00:13:19.915 "read": true, 00:13:19.915 "write": true, 00:13:19.915 "unmap": true, 00:13:19.915 "flush": true, 00:13:19.915 "reset": true, 00:13:19.915 "nvme_admin": false, 00:13:19.915 "nvme_io": false, 00:13:19.915 "nvme_io_md": false, 00:13:19.915 "write_zeroes": true, 00:13:19.915 "zcopy": false, 00:13:19.915 "get_zone_info": false, 00:13:19.915 "zone_management": false, 00:13:19.915 "zone_append": false, 00:13:19.915 "compare": false, 00:13:19.915 "compare_and_write": false, 00:13:19.915 "abort": false, 00:13:19.915 "seek_hole": false, 00:13:19.915 "seek_data": false, 00:13:19.915 "copy": false, 00:13:19.915 "nvme_iov_md": false 00:13:19.915 }, 00:13:19.916 "memory_domains": [ 00:13:19.916 { 00:13:19.916 "dma_device_id": "system", 00:13:19.916 "dma_device_type": 1 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.916 "dma_device_type": 2 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "dma_device_id": "system", 00:13:19.916 "dma_device_type": 1 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.916 "dma_device_type": 2 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "dma_device_id": "system", 00:13:19.916 "dma_device_type": 1 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.916 "dma_device_type": 2 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "dma_device_id": "system", 00:13:19.916 "dma_device_type": 1 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:19.916 "dma_device_type": 2 00:13:19.916 } 00:13:19.916 ], 00:13:19.916 "driver_specific": { 00:13:19.916 "raid": { 00:13:19.916 "uuid": "57382951-2ec6-40b5-87b0-d0e9c6ce1866", 00:13:19.916 "strip_size_kb": 64, 00:13:19.916 "state": "online", 00:13:19.916 "raid_level": "concat", 00:13:19.916 "superblock": true, 00:13:19.916 "num_base_bdevs": 4, 00:13:19.916 "num_base_bdevs_discovered": 4, 00:13:19.916 "num_base_bdevs_operational": 4, 00:13:19.916 "base_bdevs_list": [ 00:13:19.916 { 00:13:19.916 "name": "pt1", 00:13:19.916 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.916 "is_configured": true, 00:13:19.916 "data_offset": 2048, 00:13:19.916 "data_size": 63488 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "name": "pt2", 00:13:19.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.916 "is_configured": true, 00:13:19.916 "data_offset": 2048, 00:13:19.916 "data_size": 63488 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "name": "pt3", 00:13:19.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:19.916 "is_configured": true, 00:13:19.916 "data_offset": 2048, 00:13:19.916 "data_size": 63488 00:13:19.916 }, 00:13:19.916 { 00:13:19.916 "name": "pt4", 00:13:19.916 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:19.916 "is_configured": true, 00:13:19.916 "data_offset": 2048, 00:13:19.916 "data_size": 63488 00:13:19.916 } 00:13:19.916 ] 00:13:19.916 } 00:13:19.916 } 00:13:19.916 }' 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:19.916 pt2 00:13:19.916 pt3 00:13:19.916 pt4' 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.916 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.174 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.174 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.174 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.174 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:20.174 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.174 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.174 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.174 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.174 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.175 04:30:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 [2024-11-27 04:30:16.669774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=57382951-2ec6-40b5-87b0-d0e9c6ce1866 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 57382951-2ec6-40b5-87b0-d0e9c6ce1866 ']' 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 [2024-11-27 04:30:16.717344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:20.175 [2024-11-27 04:30:16.717455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.175 [2024-11-27 04:30:16.717586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.175 [2024-11-27 04:30:16.717695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.175 [2024-11-27 04:30:16.717745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.434 04:30:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.434 [2024-11-27 04:30:16.881199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:20.434 [2024-11-27 04:30:16.883555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:20.434 [2024-11-27 04:30:16.883619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:20.434 [2024-11-27 04:30:16.883658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:20.434 [2024-11-27 04:30:16.883726] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:20.434 [2024-11-27 04:30:16.883797] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:20.434 [2024-11-27 04:30:16.883819] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:20.434 [2024-11-27 04:30:16.883841] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:20.434 [2024-11-27 04:30:16.883857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:20.434 [2024-11-27 04:30:16.883870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:13:20.434 request: 00:13:20.434 { 00:13:20.434 "name": "raid_bdev1", 00:13:20.434 "raid_level": "concat", 00:13:20.434 "base_bdevs": [ 00:13:20.434 "malloc1", 00:13:20.434 "malloc2", 00:13:20.434 "malloc3", 00:13:20.434 "malloc4" 00:13:20.434 ], 00:13:20.434 "strip_size_kb": 64, 00:13:20.434 "superblock": false, 00:13:20.434 "method": "bdev_raid_create", 00:13:20.434 "req_id": 1 00:13:20.434 } 00:13:20.434 Got JSON-RPC error response 00:13:20.434 response: 00:13:20.434 { 00:13:20.434 "code": -17, 00:13:20.434 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:20.434 } 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.434 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.435 [2024-11-27 04:30:16.948930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:20.435 [2024-11-27 04:30:16.949101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.435 [2024-11-27 04:30:16.949147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:20.435 [2024-11-27 04:30:16.949184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.435 [2024-11-27 04:30:16.951908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.435 [2024-11-27 04:30:16.952014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:20.435 [2024-11-27 04:30:16.952163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:20.435 [2024-11-27 04:30:16.952275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:20.435 pt1 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.435 "name": "raid_bdev1", 00:13:20.435 "uuid": "57382951-2ec6-40b5-87b0-d0e9c6ce1866", 00:13:20.435 "strip_size_kb": 64, 00:13:20.435 "state": "configuring", 00:13:20.435 "raid_level": "concat", 00:13:20.435 "superblock": true, 00:13:20.435 "num_base_bdevs": 4, 00:13:20.435 "num_base_bdevs_discovered": 1, 00:13:20.435 "num_base_bdevs_operational": 4, 00:13:20.435 "base_bdevs_list": [ 00:13:20.435 { 00:13:20.435 "name": "pt1", 00:13:20.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:20.435 "is_configured": true, 00:13:20.435 "data_offset": 2048, 00:13:20.435 "data_size": 63488 00:13:20.435 }, 00:13:20.435 { 00:13:20.435 "name": null, 00:13:20.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.435 "is_configured": false, 00:13:20.435 "data_offset": 2048, 00:13:20.435 "data_size": 63488 00:13:20.435 }, 00:13:20.435 { 00:13:20.435 "name": null, 00:13:20.435 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:20.435 "is_configured": false, 00:13:20.435 "data_offset": 2048, 00:13:20.435 "data_size": 63488 00:13:20.435 }, 00:13:20.435 { 00:13:20.435 "name": null, 00:13:20.435 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:20.435 "is_configured": false, 00:13:20.435 "data_offset": 2048, 00:13:20.435 "data_size": 63488 00:13:20.435 } 00:13:20.435 ] 00:13:20.435 }' 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.435 04:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.002 [2024-11-27 04:30:17.416252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:21.002 [2024-11-27 04:30:17.416359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.002 [2024-11-27 04:30:17.416384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:21.002 [2024-11-27 04:30:17.416397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.002 [2024-11-27 04:30:17.416927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.002 [2024-11-27 04:30:17.416950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:21.002 [2024-11-27 04:30:17.417048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:21.002 [2024-11-27 04:30:17.417077] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:21.002 pt2 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.002 [2024-11-27 04:30:17.428202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.002 04:30:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.002 "name": "raid_bdev1", 00:13:21.002 "uuid": "57382951-2ec6-40b5-87b0-d0e9c6ce1866", 00:13:21.002 "strip_size_kb": 64, 00:13:21.002 "state": "configuring", 00:13:21.002 "raid_level": "concat", 00:13:21.002 "superblock": true, 00:13:21.002 "num_base_bdevs": 4, 00:13:21.002 "num_base_bdevs_discovered": 1, 00:13:21.002 "num_base_bdevs_operational": 4, 00:13:21.002 "base_bdevs_list": [ 00:13:21.002 { 00:13:21.002 "name": "pt1", 00:13:21.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:21.002 "is_configured": true, 00:13:21.002 "data_offset": 2048, 00:13:21.002 "data_size": 63488 00:13:21.002 }, 00:13:21.002 { 00:13:21.002 "name": null, 00:13:21.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:21.002 "is_configured": false, 00:13:21.002 "data_offset": 0, 00:13:21.002 "data_size": 63488 00:13:21.002 }, 00:13:21.002 { 00:13:21.002 "name": null, 00:13:21.002 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:21.002 "is_configured": false, 00:13:21.002 "data_offset": 2048, 00:13:21.002 "data_size": 63488 00:13:21.002 }, 00:13:21.002 { 00:13:21.002 "name": null, 00:13:21.002 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:21.002 "is_configured": false, 00:13:21.002 "data_offset": 2048, 00:13:21.002 "data_size": 63488 00:13:21.002 } 00:13:21.002 ] 00:13:21.002 }' 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.002 04:30:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.261 [2024-11-27 04:30:17.811631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:21.261 [2024-11-27 04:30:17.811853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.261 [2024-11-27 04:30:17.811905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:21.261 [2024-11-27 04:30:17.811945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.261 [2024-11-27 04:30:17.812563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.261 [2024-11-27 04:30:17.812643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:21.261 [2024-11-27 04:30:17.812793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:21.261 [2024-11-27 04:30:17.812857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:21.261 pt2 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.261 [2024-11-27 04:30:17.823512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:21.261 [2024-11-27 04:30:17.823601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.261 [2024-11-27 04:30:17.823636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:21.261 [2024-11-27 04:30:17.823662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.261 [2024-11-27 04:30:17.824078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.261 [2024-11-27 04:30:17.824143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:21.261 [2024-11-27 04:30:17.824233] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:21.261 [2024-11-27 04:30:17.824294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:21.261 pt3 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.261 [2024-11-27 04:30:17.835468] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:21.261 [2024-11-27 04:30:17.835509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.261 [2024-11-27 04:30:17.835525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:21.261 [2024-11-27 04:30:17.835533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.261 [2024-11-27 04:30:17.835906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.261 [2024-11-27 04:30:17.835922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:21.261 [2024-11-27 04:30:17.835979] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:21.261 [2024-11-27 04:30:17.836000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:21.261 [2024-11-27 04:30:17.836143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:21.261 [2024-11-27 04:30:17.836152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:21.261 [2024-11-27 04:30:17.836398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:21.261 [2024-11-27 04:30:17.836540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:21.261 [2024-11-27 04:30:17.836554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:21.261 [2024-11-27 04:30:17.836675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.261 pt4 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.261 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.520 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.520 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.520 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.520 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.520 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.520 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.520 "name": "raid_bdev1", 00:13:21.520 "uuid": "57382951-2ec6-40b5-87b0-d0e9c6ce1866", 00:13:21.520 "strip_size_kb": 64, 00:13:21.520 "state": "online", 00:13:21.520 "raid_level": "concat", 00:13:21.520 
"superblock": true, 00:13:21.520 "num_base_bdevs": 4, 00:13:21.520 "num_base_bdevs_discovered": 4, 00:13:21.520 "num_base_bdevs_operational": 4, 00:13:21.520 "base_bdevs_list": [ 00:13:21.520 { 00:13:21.520 "name": "pt1", 00:13:21.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:21.520 "is_configured": true, 00:13:21.520 "data_offset": 2048, 00:13:21.520 "data_size": 63488 00:13:21.520 }, 00:13:21.520 { 00:13:21.520 "name": "pt2", 00:13:21.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:21.520 "is_configured": true, 00:13:21.520 "data_offset": 2048, 00:13:21.520 "data_size": 63488 00:13:21.520 }, 00:13:21.520 { 00:13:21.520 "name": "pt3", 00:13:21.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:21.520 "is_configured": true, 00:13:21.520 "data_offset": 2048, 00:13:21.520 "data_size": 63488 00:13:21.520 }, 00:13:21.520 { 00:13:21.520 "name": "pt4", 00:13:21.520 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:21.520 "is_configured": true, 00:13:21.520 "data_offset": 2048, 00:13:21.520 "data_size": 63488 00:13:21.520 } 00:13:21.520 ] 00:13:21.520 }' 00:13:21.520 04:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.520 04:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:21.779 04:30:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.779 [2024-11-27 04:30:18.235333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:21.779 "name": "raid_bdev1", 00:13:21.779 "aliases": [ 00:13:21.779 "57382951-2ec6-40b5-87b0-d0e9c6ce1866" 00:13:21.779 ], 00:13:21.779 "product_name": "Raid Volume", 00:13:21.779 "block_size": 512, 00:13:21.779 "num_blocks": 253952, 00:13:21.779 "uuid": "57382951-2ec6-40b5-87b0-d0e9c6ce1866", 00:13:21.779 "assigned_rate_limits": { 00:13:21.779 "rw_ios_per_sec": 0, 00:13:21.779 "rw_mbytes_per_sec": 0, 00:13:21.779 "r_mbytes_per_sec": 0, 00:13:21.779 "w_mbytes_per_sec": 0 00:13:21.779 }, 00:13:21.779 "claimed": false, 00:13:21.779 "zoned": false, 00:13:21.779 "supported_io_types": { 00:13:21.779 "read": true, 00:13:21.779 "write": true, 00:13:21.779 "unmap": true, 00:13:21.779 "flush": true, 00:13:21.779 "reset": true, 00:13:21.779 "nvme_admin": false, 00:13:21.779 "nvme_io": false, 00:13:21.779 "nvme_io_md": false, 00:13:21.779 "write_zeroes": true, 00:13:21.779 "zcopy": false, 00:13:21.779 "get_zone_info": false, 00:13:21.779 "zone_management": false, 00:13:21.779 "zone_append": false, 00:13:21.779 "compare": false, 00:13:21.779 "compare_and_write": false, 00:13:21.779 "abort": false, 00:13:21.779 "seek_hole": false, 00:13:21.779 "seek_data": false, 00:13:21.779 "copy": false, 00:13:21.779 "nvme_iov_md": false 00:13:21.779 }, 00:13:21.779 
"memory_domains": [ 00:13:21.779 { 00:13:21.779 "dma_device_id": "system", 00:13:21.779 "dma_device_type": 1 00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.779 "dma_device_type": 2 00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "dma_device_id": "system", 00:13:21.779 "dma_device_type": 1 00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.779 "dma_device_type": 2 00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "dma_device_id": "system", 00:13:21.779 "dma_device_type": 1 00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.779 "dma_device_type": 2 00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "dma_device_id": "system", 00:13:21.779 "dma_device_type": 1 00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.779 "dma_device_type": 2 00:13:21.779 } 00:13:21.779 ], 00:13:21.779 "driver_specific": { 00:13:21.779 "raid": { 00:13:21.779 "uuid": "57382951-2ec6-40b5-87b0-d0e9c6ce1866", 00:13:21.779 "strip_size_kb": 64, 00:13:21.779 "state": "online", 00:13:21.779 "raid_level": "concat", 00:13:21.779 "superblock": true, 00:13:21.779 "num_base_bdevs": 4, 00:13:21.779 "num_base_bdevs_discovered": 4, 00:13:21.779 "num_base_bdevs_operational": 4, 00:13:21.779 "base_bdevs_list": [ 00:13:21.779 { 00:13:21.779 "name": "pt1", 00:13:21.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:21.779 "is_configured": true, 00:13:21.779 "data_offset": 2048, 00:13:21.779 "data_size": 63488 00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "name": "pt2", 00:13:21.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:21.779 "is_configured": true, 00:13:21.779 "data_offset": 2048, 00:13:21.779 "data_size": 63488 00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "name": "pt3", 00:13:21.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:21.779 "is_configured": true, 00:13:21.779 "data_offset": 2048, 00:13:21.779 "data_size": 63488 
00:13:21.779 }, 00:13:21.779 { 00:13:21.779 "name": "pt4", 00:13:21.779 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:21.779 "is_configured": true, 00:13:21.779 "data_offset": 2048, 00:13:21.779 "data_size": 63488 00:13:21.779 } 00:13:21.779 ] 00:13:21.779 } 00:13:21.779 } 00:13:21.779 }' 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:21.779 pt2 00:13:21.779 pt3 00:13:21.779 pt4' 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.779 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.039 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.040 [2024-11-27 04:30:18.534723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 57382951-2ec6-40b5-87b0-d0e9c6ce1866 '!=' 57382951-2ec6-40b5-87b0-d0e9c6ce1866 ']' 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72913 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72913 ']' 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72913 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72913 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72913' 00:13:22.040 killing process with pid 72913 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72913 00:13:22.040 [2024-11-27 04:30:18.608030] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.040 04:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72913 00:13:22.040 [2024-11-27 04:30:18.608284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.040 [2024-11-27 04:30:18.608398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.040 [2024-11-27 04:30:18.608411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:22.607 [2024-11-27 04:30:19.050233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.009 04:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:24.009 00:13:24.009 real 0m5.693s 00:13:24.009 user 0m7.912s 00:13:24.009 sys 0m1.081s 00:13:24.009 04:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.009 04:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.009 ************************************ 00:13:24.009 END TEST raid_superblock_test 
00:13:24.010 ************************************ 00:13:24.010 04:30:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:24.010 04:30:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:24.010 04:30:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.010 04:30:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.010 ************************************ 00:13:24.010 START TEST raid_read_error_test 00:13:24.010 ************************************ 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3slMHugsGe 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73179 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73179 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73179 ']' 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.010 04:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.010 [2024-11-27 04:30:20.532245] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:13:24.010 [2024-11-27 04:30:20.532396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73179 ] 00:13:24.268 [2024-11-27 04:30:20.716984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.527 [2024-11-27 04:30:20.868517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.785 [2024-11-27 04:30:21.112998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.785 [2024-11-27 04:30:21.113057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.045 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.045 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:25.045 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:25.045 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 BaseBdev1_malloc 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 true 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 [2024-11-27 04:30:21.462263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:25.046 [2024-11-27 04:30:21.462343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.046 [2024-11-27 04:30:21.462367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:25.046 [2024-11-27 04:30:21.462380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.046 [2024-11-27 04:30:21.464980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.046 [2024-11-27 04:30:21.465024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:25.046 BaseBdev1 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 BaseBdev2_malloc 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 true 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 [2024-11-27 04:30:21.539435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:25.046 [2024-11-27 04:30:21.539514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.046 [2024-11-27 04:30:21.539535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:25.046 [2024-11-27 04:30:21.539547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.046 [2024-11-27 04:30:21.542089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.046 [2024-11-27 04:30:21.542143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:25.046 BaseBdev2 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 BaseBdev3_malloc 00:13:25.046 04:30:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 true 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 [2024-11-27 04:30:21.625847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:25.046 [2024-11-27 04:30:21.625919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.046 [2024-11-27 04:30:21.625941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:25.046 [2024-11-27 04:30:21.625954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.046 [2024-11-27 04:30:21.628665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.046 [2024-11-27 04:30:21.628709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:25.306 BaseBdev3 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.306 BaseBdev4_malloc 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.306 true 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.306 [2024-11-27 04:30:21.703863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:25.306 [2024-11-27 04:30:21.703941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.306 [2024-11-27 04:30:21.703976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:25.306 [2024-11-27 04:30:21.703989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.306 [2024-11-27 04:30:21.706539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.306 [2024-11-27 04:30:21.706583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:25.306 BaseBdev4 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.306 [2024-11-27 04:30:21.715903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.306 [2024-11-27 04:30:21.718013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.306 [2024-11-27 04:30:21.718105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.306 [2024-11-27 04:30:21.718170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.306 [2024-11-27 04:30:21.718396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:25.306 [2024-11-27 04:30:21.718410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:25.306 [2024-11-27 04:30:21.718669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:25.306 [2024-11-27 04:30:21.718852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:25.306 [2024-11-27 04:30:21.718864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:25.306 [2024-11-27 04:30:21.719024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:25.306 04:30:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.306 "name": "raid_bdev1", 00:13:25.306 "uuid": "851fd3be-9f90-4f8f-9b6b-40ccde11b416", 00:13:25.306 "strip_size_kb": 64, 00:13:25.306 "state": "online", 00:13:25.306 "raid_level": "concat", 00:13:25.306 "superblock": true, 00:13:25.306 "num_base_bdevs": 4, 00:13:25.306 "num_base_bdevs_discovered": 4, 00:13:25.306 "num_base_bdevs_operational": 4, 00:13:25.306 "base_bdevs_list": [ 
00:13:25.306 { 00:13:25.306 "name": "BaseBdev1", 00:13:25.306 "uuid": "8391e142-88db-5804-b62b-5a67da495e12", 00:13:25.306 "is_configured": true, 00:13:25.306 "data_offset": 2048, 00:13:25.306 "data_size": 63488 00:13:25.306 }, 00:13:25.306 { 00:13:25.306 "name": "BaseBdev2", 00:13:25.306 "uuid": "224c773b-4a29-5667-b81a-97b5684509cd", 00:13:25.306 "is_configured": true, 00:13:25.306 "data_offset": 2048, 00:13:25.306 "data_size": 63488 00:13:25.306 }, 00:13:25.306 { 00:13:25.306 "name": "BaseBdev3", 00:13:25.306 "uuid": "434291e0-e188-5bcb-84db-eb12a9e8f499", 00:13:25.306 "is_configured": true, 00:13:25.306 "data_offset": 2048, 00:13:25.306 "data_size": 63488 00:13:25.306 }, 00:13:25.306 { 00:13:25.306 "name": "BaseBdev4", 00:13:25.306 "uuid": "84f3481b-1a8a-50a7-9b9b-31e77d8d4886", 00:13:25.306 "is_configured": true, 00:13:25.306 "data_offset": 2048, 00:13:25.306 "data_size": 63488 00:13:25.306 } 00:13:25.306 ] 00:13:25.306 }' 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.306 04:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.874 04:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:25.874 04:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:25.874 [2024-11-27 04:30:22.320405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.812 04:30:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.812 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.812 04:30:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.812 "name": "raid_bdev1", 00:13:26.812 "uuid": "851fd3be-9f90-4f8f-9b6b-40ccde11b416", 00:13:26.812 "strip_size_kb": 64, 00:13:26.812 "state": "online", 00:13:26.812 "raid_level": "concat", 00:13:26.812 "superblock": true, 00:13:26.812 "num_base_bdevs": 4, 00:13:26.812 "num_base_bdevs_discovered": 4, 00:13:26.812 "num_base_bdevs_operational": 4, 00:13:26.812 "base_bdevs_list": [ 00:13:26.812 { 00:13:26.812 "name": "BaseBdev1", 00:13:26.812 "uuid": "8391e142-88db-5804-b62b-5a67da495e12", 00:13:26.812 "is_configured": true, 00:13:26.812 "data_offset": 2048, 00:13:26.812 "data_size": 63488 00:13:26.812 }, 00:13:26.812 { 00:13:26.812 "name": "BaseBdev2", 00:13:26.812 "uuid": "224c773b-4a29-5667-b81a-97b5684509cd", 00:13:26.812 "is_configured": true, 00:13:26.812 "data_offset": 2048, 00:13:26.812 "data_size": 63488 00:13:26.812 }, 00:13:26.812 { 00:13:26.812 "name": "BaseBdev3", 00:13:26.812 "uuid": "434291e0-e188-5bcb-84db-eb12a9e8f499", 00:13:26.812 "is_configured": true, 00:13:26.812 "data_offset": 2048, 00:13:26.812 "data_size": 63488 00:13:26.812 }, 00:13:26.812 { 00:13:26.812 "name": "BaseBdev4", 00:13:26.812 "uuid": "84f3481b-1a8a-50a7-9b9b-31e77d8d4886", 00:13:26.812 "is_configured": true, 00:13:26.812 "data_offset": 2048, 00:13:26.813 "data_size": 63488 00:13:26.813 } 00:13:26.813 ] 00:13:26.813 }' 00:13:26.813 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.813 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.440 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:27.440 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.440 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.440 [2024-11-27 04:30:23.726566] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.440 [2024-11-27 04:30:23.726623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.440 [2024-11-27 04:30:23.729858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.440 [2024-11-27 04:30:23.729932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.440 [2024-11-27 04:30:23.729987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.440 [2024-11-27 04:30:23.730005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:27.440 { 00:13:27.440 "results": [ 00:13:27.440 { 00:13:27.440 "job": "raid_bdev1", 00:13:27.440 "core_mask": "0x1", 00:13:27.440 "workload": "randrw", 00:13:27.440 "percentage": 50, 00:13:27.440 "status": "finished", 00:13:27.440 "queue_depth": 1, 00:13:27.440 "io_size": 131072, 00:13:27.440 "runtime": 1.406452, 00:13:27.440 "iops": 12397.863560221038, 00:13:27.440 "mibps": 1549.7329450276297, 00:13:27.440 "io_failed": 1, 00:13:27.440 "io_timeout": 0, 00:13:27.440 "avg_latency_us": 113.0338992643181, 00:13:27.440 "min_latency_us": 28.39475982532751, 00:13:27.440 "max_latency_us": 1652.709170305677 00:13:27.440 } 00:13:27.440 ], 00:13:27.440 "core_count": 1 00:13:27.440 } 00:13:27.440 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.440 04:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73179 00:13:27.440 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73179 ']' 00:13:27.440 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73179 00:13:27.441 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:27.441 04:30:23 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.441 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73179 00:13:27.441 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.441 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.441 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73179' 00:13:27.441 killing process with pid 73179 00:13:27.441 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73179 00:13:27.441 [2024-11-27 04:30:23.765379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.441 04:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73179 00:13:27.698 [2024-11-27 04:30:24.164892] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3slMHugsGe 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:29.076 00:13:29.076 real 0m5.256s 00:13:29.076 user 0m6.047s 00:13:29.076 sys 0m0.771s 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:29.076 04:30:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.076 ************************************ 00:13:29.076 END TEST raid_read_error_test 00:13:29.076 ************************************ 00:13:29.335 04:30:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:29.335 04:30:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:29.335 04:30:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.335 04:30:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.335 ************************************ 00:13:29.335 START TEST raid_write_error_test 00:13:29.335 ************************************ 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pyOKUwQGbK 00:13:29.335 04:30:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73329 00:13:29.335 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:29.336 04:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73329 00:13:29.336 04:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73329 ']' 00:13:29.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.336 04:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.336 04:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.336 04:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.336 04:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.336 04:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.336 [2024-11-27 04:30:25.856357] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:13:29.336 [2024-11-27 04:30:25.856517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73329 ] 00:13:29.594 [2024-11-27 04:30:26.045312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.853 [2024-11-27 04:30:26.210252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.112 [2024-11-27 04:30:26.489631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.112 [2024-11-27 04:30:26.489814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.372 BaseBdev1_malloc 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.372 true 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.372 [2024-11-27 04:30:26.821508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:30.372 [2024-11-27 04:30:26.821703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.372 [2024-11-27 04:30:26.821738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:30.372 [2024-11-27 04:30:26.821756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.372 [2024-11-27 04:30:26.824761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.372 [2024-11-27 04:30:26.824857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:30.372 BaseBdev1 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.372 BaseBdev2_malloc 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:30.372 04:30:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.372 true 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.372 [2024-11-27 04:30:26.907976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:30.372 [2024-11-27 04:30:26.908168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.372 [2024-11-27 04:30:26.908198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:30.372 [2024-11-27 04:30:26.908213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.372 [2024-11-27 04:30:26.911052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.372 [2024-11-27 04:30:26.911116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:30.372 BaseBdev2 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.372 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:30.643 BaseBdev3_malloc 00:13:30.643 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:30.643 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 true 00:13:30.643 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 04:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:30.643 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 04:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 [2024-11-27 04:30:27.001659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:30.643 [2024-11-27 04:30:27.001750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.643 [2024-11-27 04:30:27.001777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:30.643 [2024-11-27 04:30:27.001791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.643 [2024-11-27 04:30:27.004743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.643 [2024-11-27 04:30:27.004798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:30.643 BaseBdev3 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 BaseBdev4_malloc 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 true 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 [2024-11-27 04:30:27.085499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:30.643 [2024-11-27 04:30:27.085578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.643 [2024-11-27 04:30:27.085603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:30.643 [2024-11-27 04:30:27.085616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.643 [2024-11-27 04:30:27.088366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.643 [2024-11-27 04:30:27.088505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:30.643 BaseBdev4 
00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 [2024-11-27 04:30:27.097574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.643 [2024-11-27 04:30:27.099978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.643 [2024-11-27 04:30:27.100173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.643 [2024-11-27 04:30:27.100256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.643 [2024-11-27 04:30:27.100538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:30.643 [2024-11-27 04:30:27.100556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:30.643 [2024-11-27 04:30:27.100890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:30.643 [2024-11-27 04:30:27.101106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:30.643 [2024-11-27 04:30:27.101120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:30.643 [2024-11-27 04:30:27.101326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:30.643 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.644 "name": "raid_bdev1", 00:13:30.644 "uuid": "72def91c-9f28-4142-94e4-5ceeae954625", 00:13:30.644 "strip_size_kb": 64, 00:13:30.644 "state": "online", 00:13:30.644 "raid_level": "concat", 00:13:30.644 "superblock": true, 00:13:30.644 "num_base_bdevs": 4, 00:13:30.644 "num_base_bdevs_discovered": 4, 00:13:30.644 
"num_base_bdevs_operational": 4, 00:13:30.644 "base_bdevs_list": [ 00:13:30.644 { 00:13:30.644 "name": "BaseBdev1", 00:13:30.644 "uuid": "8317d571-8317-55f5-8f32-8f6ecd88c90a", 00:13:30.644 "is_configured": true, 00:13:30.644 "data_offset": 2048, 00:13:30.644 "data_size": 63488 00:13:30.644 }, 00:13:30.644 { 00:13:30.644 "name": "BaseBdev2", 00:13:30.644 "uuid": "7b00c714-8545-50c0-8373-702286f1568b", 00:13:30.644 "is_configured": true, 00:13:30.644 "data_offset": 2048, 00:13:30.644 "data_size": 63488 00:13:30.644 }, 00:13:30.644 { 00:13:30.644 "name": "BaseBdev3", 00:13:30.644 "uuid": "7af9beee-b44c-5342-bb44-780a4aba0a70", 00:13:30.644 "is_configured": true, 00:13:30.644 "data_offset": 2048, 00:13:30.644 "data_size": 63488 00:13:30.644 }, 00:13:30.644 { 00:13:30.644 "name": "BaseBdev4", 00:13:30.644 "uuid": "958b948c-18b9-5aa9-9179-b48cdac58b52", 00:13:30.644 "is_configured": true, 00:13:30.644 "data_offset": 2048, 00:13:30.644 "data_size": 63488 00:13:30.644 } 00:13:30.644 ] 00:13:30.644 }' 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.644 04:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.223 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:31.223 04:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:31.223 [2024-11-27 04:30:27.674438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.163 04:30:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.163 "name": "raid_bdev1", 00:13:32.163 "uuid": "72def91c-9f28-4142-94e4-5ceeae954625", 00:13:32.163 "strip_size_kb": 64, 00:13:32.163 "state": "online", 00:13:32.163 "raid_level": "concat", 00:13:32.163 "superblock": true, 00:13:32.163 "num_base_bdevs": 4, 00:13:32.163 "num_base_bdevs_discovered": 4, 00:13:32.163 "num_base_bdevs_operational": 4, 00:13:32.163 "base_bdevs_list": [ 00:13:32.163 { 00:13:32.163 "name": "BaseBdev1", 00:13:32.163 "uuid": "8317d571-8317-55f5-8f32-8f6ecd88c90a", 00:13:32.163 "is_configured": true, 00:13:32.163 "data_offset": 2048, 00:13:32.163 "data_size": 63488 00:13:32.163 }, 00:13:32.163 { 00:13:32.163 "name": "BaseBdev2", 00:13:32.163 "uuid": "7b00c714-8545-50c0-8373-702286f1568b", 00:13:32.163 "is_configured": true, 00:13:32.163 "data_offset": 2048, 00:13:32.163 "data_size": 63488 00:13:32.163 }, 00:13:32.163 { 00:13:32.163 "name": "BaseBdev3", 00:13:32.163 "uuid": "7af9beee-b44c-5342-bb44-780a4aba0a70", 00:13:32.163 "is_configured": true, 00:13:32.163 "data_offset": 2048, 00:13:32.163 "data_size": 63488 00:13:32.163 }, 00:13:32.163 { 00:13:32.163 "name": "BaseBdev4", 00:13:32.163 "uuid": "958b948c-18b9-5aa9-9179-b48cdac58b52", 00:13:32.163 "is_configured": true, 00:13:32.163 "data_offset": 2048, 00:13:32.163 "data_size": 63488 00:13:32.163 } 00:13:32.163 ] 00:13:32.163 }' 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.163 04:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:32.732 [2024-11-27 04:30:29.017316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.732 [2024-11-27 04:30:29.017373] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.732 [2024-11-27 04:30:29.020453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.732 [2024-11-27 04:30:29.020543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.732 [2024-11-27 04:30:29.020599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.732 [2024-11-27 04:30:29.020613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:32.732 { 00:13:32.732 "results": [ 00:13:32.732 { 00:13:32.732 "job": "raid_bdev1", 00:13:32.732 "core_mask": "0x1", 00:13:32.732 "workload": "randrw", 00:13:32.732 "percentage": 50, 00:13:32.732 "status": "finished", 00:13:32.732 "queue_depth": 1, 00:13:32.732 "io_size": 131072, 00:13:32.732 "runtime": 1.342889, 00:13:32.732 "iops": 11821.528063749125, 00:13:32.732 "mibps": 1477.6910079686406, 00:13:32.732 "io_failed": 1, 00:13:32.732 "io_timeout": 0, 00:13:32.732 "avg_latency_us": 118.53881907930567, 00:13:32.732 "min_latency_us": 27.053275109170304, 00:13:32.732 "max_latency_us": 1681.3275109170306 00:13:32.732 } 00:13:32.732 ], 00:13:32.732 "core_count": 1 00:13:32.732 } 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73329 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73329 ']' 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73329 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73329 00:13:32.732 killing process with pid 73329 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73329' 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73329 00:13:32.732 [2024-11-27 04:30:29.059677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.732 04:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73329 00:13:32.991 [2024-11-27 04:30:29.436621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.379 04:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pyOKUwQGbK 00:13:34.379 04:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:34.379 04:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:34.379 04:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:34.379 04:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:34.379 04:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:34.379 04:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:34.379 ************************************ 00:13:34.379 END TEST raid_write_error_test 00:13:34.379 ************************************ 00:13:34.379 04:30:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:34.379 00:13:34.379 real 0m5.109s 00:13:34.379 user 0m5.867s 00:13:34.379 sys 0m0.745s 00:13:34.379 04:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.379 04:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.379 04:30:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:34.379 04:30:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:34.379 04:30:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:34.379 04:30:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.379 04:30:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.379 ************************************ 00:13:34.379 START TEST raid_state_function_test 00:13:34.379 ************************************ 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:34.379 04:30:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73474 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73474' 00:13:34.379 Process raid pid: 73474 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73474 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73474 ']' 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.379 04:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.637 [2024-11-27 04:30:31.000991] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:13:34.637 [2024-11-27 04:30:31.001237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.637 [2024-11-27 04:30:31.178249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.894 [2024-11-27 04:30:31.330022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.152 [2024-11-27 04:30:31.579899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.152 [2024-11-27 04:30:31.580067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.410 [2024-11-27 04:30:31.902805] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.410 [2024-11-27 04:30:31.902991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.410 [2024-11-27 04:30:31.903026] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.410 [2024-11-27 04:30:31.903051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.410 [2024-11-27 04:30:31.903071] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:35.410 [2024-11-27 04:30:31.903121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.410 [2024-11-27 04:30:31.903148] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:35.410 [2024-11-27 04:30:31.903177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.410 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.410 "name": "Existed_Raid", 00:13:35.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.410 "strip_size_kb": 0, 00:13:35.410 "state": "configuring", 00:13:35.410 "raid_level": "raid1", 00:13:35.410 "superblock": false, 00:13:35.410 "num_base_bdevs": 4, 00:13:35.410 "num_base_bdevs_discovered": 0, 00:13:35.410 "num_base_bdevs_operational": 4, 00:13:35.410 "base_bdevs_list": [ 00:13:35.410 { 00:13:35.410 "name": "BaseBdev1", 00:13:35.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.410 "is_configured": false, 00:13:35.410 "data_offset": 0, 00:13:35.411 "data_size": 0 00:13:35.411 }, 00:13:35.411 { 00:13:35.411 "name": "BaseBdev2", 00:13:35.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.411 "is_configured": false, 00:13:35.411 "data_offset": 0, 00:13:35.411 "data_size": 0 00:13:35.411 }, 00:13:35.411 { 00:13:35.411 "name": "BaseBdev3", 00:13:35.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.411 "is_configured": false, 00:13:35.411 "data_offset": 0, 00:13:35.411 "data_size": 0 00:13:35.411 }, 00:13:35.411 { 00:13:35.411 "name": "BaseBdev4", 00:13:35.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.411 "is_configured": false, 00:13:35.411 "data_offset": 0, 00:13:35.411 "data_size": 0 00:13:35.411 } 00:13:35.411 ] 00:13:35.411 }' 00:13:35.411 04:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.411 04:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 [2024-11-27 04:30:32.413965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.977 [2024-11-27 04:30:32.414025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 [2024-11-27 04:30:32.425876] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.977 [2024-11-27 04:30:32.425926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.977 [2024-11-27 04:30:32.425936] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.977 [2024-11-27 04:30:32.425946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.977 [2024-11-27 04:30:32.425952] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:35.977 [2024-11-27 04:30:32.425962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.977 [2024-11-27 04:30:32.425968] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:35.977 [2024-11-27 04:30:32.425977] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 [2024-11-27 04:30:32.480710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.977 BaseBdev1 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.977 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 [ 00:13:35.977 { 00:13:35.977 "name": "BaseBdev1", 00:13:35.977 "aliases": [ 00:13:35.977 "9c951c8d-6078-4876-a455-84b0240e5a82" 00:13:35.977 ], 00:13:35.977 "product_name": "Malloc disk", 00:13:35.977 "block_size": 512, 00:13:35.977 "num_blocks": 65536, 00:13:35.977 "uuid": "9c951c8d-6078-4876-a455-84b0240e5a82", 00:13:35.977 "assigned_rate_limits": { 00:13:35.977 "rw_ios_per_sec": 0, 00:13:35.977 "rw_mbytes_per_sec": 0, 00:13:35.977 "r_mbytes_per_sec": 0, 00:13:35.977 "w_mbytes_per_sec": 0 00:13:35.977 }, 00:13:35.977 "claimed": true, 00:13:35.977 "claim_type": "exclusive_write", 00:13:35.977 "zoned": false, 00:13:35.977 "supported_io_types": { 00:13:35.977 "read": true, 00:13:35.977 "write": true, 00:13:35.977 "unmap": true, 00:13:35.977 "flush": true, 00:13:35.977 "reset": true, 00:13:35.977 "nvme_admin": false, 00:13:35.977 "nvme_io": false, 00:13:35.977 "nvme_io_md": false, 00:13:35.977 "write_zeroes": true, 00:13:35.977 "zcopy": true, 00:13:35.977 "get_zone_info": false, 00:13:35.977 "zone_management": false, 00:13:35.977 "zone_append": false, 00:13:35.978 "compare": false, 00:13:35.978 "compare_and_write": false, 00:13:35.978 "abort": true, 00:13:35.978 "seek_hole": false, 00:13:35.978 "seek_data": false, 00:13:35.978 "copy": true, 00:13:35.978 "nvme_iov_md": false 00:13:35.978 }, 00:13:35.978 "memory_domains": [ 00:13:35.978 { 00:13:35.978 "dma_device_id": "system", 00:13:35.978 "dma_device_type": 1 00:13:35.978 }, 00:13:35.978 { 00:13:35.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.978 "dma_device_type": 2 00:13:35.978 } 00:13:35.978 ], 00:13:35.978 "driver_specific": {} 00:13:35.978 } 00:13:35.978 ] 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.978 "name": "Existed_Raid", 
00:13:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.978 "strip_size_kb": 0, 00:13:35.978 "state": "configuring", 00:13:35.978 "raid_level": "raid1", 00:13:35.978 "superblock": false, 00:13:35.978 "num_base_bdevs": 4, 00:13:35.978 "num_base_bdevs_discovered": 1, 00:13:35.978 "num_base_bdevs_operational": 4, 00:13:35.978 "base_bdevs_list": [ 00:13:35.978 { 00:13:35.978 "name": "BaseBdev1", 00:13:35.978 "uuid": "9c951c8d-6078-4876-a455-84b0240e5a82", 00:13:35.978 "is_configured": true, 00:13:35.978 "data_offset": 0, 00:13:35.978 "data_size": 65536 00:13:35.978 }, 00:13:35.978 { 00:13:35.978 "name": "BaseBdev2", 00:13:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.978 "is_configured": false, 00:13:35.978 "data_offset": 0, 00:13:35.978 "data_size": 0 00:13:35.978 }, 00:13:35.978 { 00:13:35.978 "name": "BaseBdev3", 00:13:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.978 "is_configured": false, 00:13:35.978 "data_offset": 0, 00:13:35.978 "data_size": 0 00:13:35.978 }, 00:13:35.978 { 00:13:35.978 "name": "BaseBdev4", 00:13:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.978 "is_configured": false, 00:13:35.978 "data_offset": 0, 00:13:35.978 "data_size": 0 00:13:35.978 } 00:13:35.978 ] 00:13:35.978 }' 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.978 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.545 [2024-11-27 04:30:32.967961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:36.545 [2024-11-27 04:30:32.968148] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.545 [2024-11-27 04:30:32.975991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.545 [2024-11-27 04:30:32.978410] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:36.545 [2024-11-27 04:30:32.978515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:36.545 [2024-11-27 04:30:32.978555] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:36.545 [2024-11-27 04:30:32.978588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:36.545 [2024-11-27 04:30:32.978614] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:36.545 [2024-11-27 04:30:32.978654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:36.545 
04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.545 04:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.545 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.545 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.545 "name": "Existed_Raid", 00:13:36.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.545 "strip_size_kb": 0, 00:13:36.545 "state": "configuring", 00:13:36.545 "raid_level": "raid1", 00:13:36.545 "superblock": false, 00:13:36.545 "num_base_bdevs": 4, 00:13:36.545 "num_base_bdevs_discovered": 1, 
00:13:36.545 "num_base_bdevs_operational": 4, 00:13:36.545 "base_bdevs_list": [ 00:13:36.545 { 00:13:36.545 "name": "BaseBdev1", 00:13:36.545 "uuid": "9c951c8d-6078-4876-a455-84b0240e5a82", 00:13:36.545 "is_configured": true, 00:13:36.545 "data_offset": 0, 00:13:36.545 "data_size": 65536 00:13:36.545 }, 00:13:36.545 { 00:13:36.545 "name": "BaseBdev2", 00:13:36.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.545 "is_configured": false, 00:13:36.545 "data_offset": 0, 00:13:36.545 "data_size": 0 00:13:36.545 }, 00:13:36.545 { 00:13:36.545 "name": "BaseBdev3", 00:13:36.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.545 "is_configured": false, 00:13:36.545 "data_offset": 0, 00:13:36.545 "data_size": 0 00:13:36.545 }, 00:13:36.545 { 00:13:36.545 "name": "BaseBdev4", 00:13:36.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.545 "is_configured": false, 00:13:36.545 "data_offset": 0, 00:13:36.545 "data_size": 0 00:13:36.545 } 00:13:36.545 ] 00:13:36.545 }' 00:13:36.545 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.545 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.112 [2024-11-27 04:30:33.463621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:37.112 BaseBdev2 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.112 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.112 [ 00:13:37.113 { 00:13:37.113 "name": "BaseBdev2", 00:13:37.113 "aliases": [ 00:13:37.113 "2d1f2b67-ace0-4af6-9de1-468afdc1bee5" 00:13:37.113 ], 00:13:37.113 "product_name": "Malloc disk", 00:13:37.113 "block_size": 512, 00:13:37.113 "num_blocks": 65536, 00:13:37.113 "uuid": "2d1f2b67-ace0-4af6-9de1-468afdc1bee5", 00:13:37.113 "assigned_rate_limits": { 00:13:37.113 "rw_ios_per_sec": 0, 00:13:37.113 "rw_mbytes_per_sec": 0, 00:13:37.113 "r_mbytes_per_sec": 0, 00:13:37.113 "w_mbytes_per_sec": 0 00:13:37.113 }, 00:13:37.113 "claimed": true, 00:13:37.113 "claim_type": "exclusive_write", 00:13:37.113 "zoned": false, 00:13:37.113 "supported_io_types": { 00:13:37.113 "read": true, 
00:13:37.113 "write": true, 00:13:37.113 "unmap": true, 00:13:37.113 "flush": true, 00:13:37.113 "reset": true, 00:13:37.113 "nvme_admin": false, 00:13:37.113 "nvme_io": false, 00:13:37.113 "nvme_io_md": false, 00:13:37.113 "write_zeroes": true, 00:13:37.113 "zcopy": true, 00:13:37.113 "get_zone_info": false, 00:13:37.113 "zone_management": false, 00:13:37.113 "zone_append": false, 00:13:37.113 "compare": false, 00:13:37.113 "compare_and_write": false, 00:13:37.113 "abort": true, 00:13:37.113 "seek_hole": false, 00:13:37.113 "seek_data": false, 00:13:37.113 "copy": true, 00:13:37.113 "nvme_iov_md": false 00:13:37.113 }, 00:13:37.113 "memory_domains": [ 00:13:37.113 { 00:13:37.113 "dma_device_id": "system", 00:13:37.113 "dma_device_type": 1 00:13:37.113 }, 00:13:37.113 { 00:13:37.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.113 "dma_device_type": 2 00:13:37.113 } 00:13:37.113 ], 00:13:37.113 "driver_specific": {} 00:13:37.113 } 00:13:37.113 ] 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.113 "name": "Existed_Raid", 00:13:37.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.113 "strip_size_kb": 0, 00:13:37.113 "state": "configuring", 00:13:37.113 "raid_level": "raid1", 00:13:37.113 "superblock": false, 00:13:37.113 "num_base_bdevs": 4, 00:13:37.113 "num_base_bdevs_discovered": 2, 00:13:37.113 "num_base_bdevs_operational": 4, 00:13:37.113 "base_bdevs_list": [ 00:13:37.113 { 00:13:37.113 "name": "BaseBdev1", 00:13:37.113 "uuid": "9c951c8d-6078-4876-a455-84b0240e5a82", 00:13:37.113 "is_configured": true, 00:13:37.113 "data_offset": 0, 00:13:37.113 "data_size": 65536 00:13:37.113 }, 00:13:37.113 { 00:13:37.113 "name": "BaseBdev2", 00:13:37.113 "uuid": "2d1f2b67-ace0-4af6-9de1-468afdc1bee5", 00:13:37.113 "is_configured": true, 
00:13:37.113 "data_offset": 0, 00:13:37.113 "data_size": 65536 00:13:37.113 }, 00:13:37.113 { 00:13:37.113 "name": "BaseBdev3", 00:13:37.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.113 "is_configured": false, 00:13:37.113 "data_offset": 0, 00:13:37.113 "data_size": 0 00:13:37.113 }, 00:13:37.113 { 00:13:37.113 "name": "BaseBdev4", 00:13:37.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.113 "is_configured": false, 00:13:37.113 "data_offset": 0, 00:13:37.113 "data_size": 0 00:13:37.113 } 00:13:37.113 ] 00:13:37.113 }' 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.113 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.370 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:37.370 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.370 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.631 [2024-11-27 04:30:33.966719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.631 BaseBdev3 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.631 04:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.631 [ 00:13:37.631 { 00:13:37.631 "name": "BaseBdev3", 00:13:37.631 "aliases": [ 00:13:37.631 "aaecd7ae-9bb6-4e24-9202-cbbe45e1532b" 00:13:37.631 ], 00:13:37.631 "product_name": "Malloc disk", 00:13:37.631 "block_size": 512, 00:13:37.631 "num_blocks": 65536, 00:13:37.631 "uuid": "aaecd7ae-9bb6-4e24-9202-cbbe45e1532b", 00:13:37.631 "assigned_rate_limits": { 00:13:37.631 "rw_ios_per_sec": 0, 00:13:37.631 "rw_mbytes_per_sec": 0, 00:13:37.631 "r_mbytes_per_sec": 0, 00:13:37.631 "w_mbytes_per_sec": 0 00:13:37.631 }, 00:13:37.631 "claimed": true, 00:13:37.631 "claim_type": "exclusive_write", 00:13:37.631 "zoned": false, 00:13:37.631 "supported_io_types": { 00:13:37.631 "read": true, 00:13:37.631 "write": true, 00:13:37.631 "unmap": true, 00:13:37.631 "flush": true, 00:13:37.631 "reset": true, 00:13:37.631 "nvme_admin": false, 00:13:37.631 "nvme_io": false, 00:13:37.631 "nvme_io_md": false, 00:13:37.631 "write_zeroes": true, 00:13:37.631 "zcopy": true, 00:13:37.631 "get_zone_info": false, 00:13:37.631 "zone_management": false, 00:13:37.631 "zone_append": false, 00:13:37.631 "compare": false, 00:13:37.631 "compare_and_write": false, 
00:13:37.631 "abort": true, 00:13:37.631 "seek_hole": false, 00:13:37.631 "seek_data": false, 00:13:37.631 "copy": true, 00:13:37.631 "nvme_iov_md": false 00:13:37.631 }, 00:13:37.631 "memory_domains": [ 00:13:37.631 { 00:13:37.631 "dma_device_id": "system", 00:13:37.631 "dma_device_type": 1 00:13:37.631 }, 00:13:37.631 { 00:13:37.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.631 "dma_device_type": 2 00:13:37.631 } 00:13:37.631 ], 00:13:37.631 "driver_specific": {} 00:13:37.631 } 00:13:37.631 ] 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.631 "name": "Existed_Raid", 00:13:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.631 "strip_size_kb": 0, 00:13:37.631 "state": "configuring", 00:13:37.631 "raid_level": "raid1", 00:13:37.631 "superblock": false, 00:13:37.631 "num_base_bdevs": 4, 00:13:37.631 "num_base_bdevs_discovered": 3, 00:13:37.631 "num_base_bdevs_operational": 4, 00:13:37.631 "base_bdevs_list": [ 00:13:37.631 { 00:13:37.631 "name": "BaseBdev1", 00:13:37.631 "uuid": "9c951c8d-6078-4876-a455-84b0240e5a82", 00:13:37.631 "is_configured": true, 00:13:37.631 "data_offset": 0, 00:13:37.631 "data_size": 65536 00:13:37.631 }, 00:13:37.631 { 00:13:37.631 "name": "BaseBdev2", 00:13:37.631 "uuid": "2d1f2b67-ace0-4af6-9de1-468afdc1bee5", 00:13:37.631 "is_configured": true, 00:13:37.631 "data_offset": 0, 00:13:37.631 "data_size": 65536 00:13:37.631 }, 00:13:37.631 { 00:13:37.631 "name": "BaseBdev3", 00:13:37.631 "uuid": "aaecd7ae-9bb6-4e24-9202-cbbe45e1532b", 00:13:37.631 "is_configured": true, 00:13:37.631 "data_offset": 0, 00:13:37.631 "data_size": 65536 00:13:37.631 }, 00:13:37.631 { 00:13:37.631 "name": "BaseBdev4", 00:13:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.631 "is_configured": false, 
00:13:37.631 "data_offset": 0, 00:13:37.631 "data_size": 0 00:13:37.631 } 00:13:37.631 ] 00:13:37.631 }' 00:13:37.631 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.632 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.890 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:37.890 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.890 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.149 [2024-11-27 04:30:34.485200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.149 [2024-11-27 04:30:34.485298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:38.149 [2024-11-27 04:30:34.485311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:38.149 [2024-11-27 04:30:34.485638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:38.149 [2024-11-27 04:30:34.485835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:38.149 [2024-11-27 04:30:34.485861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:38.149 [2024-11-27 04:30:34.486175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.149 BaseBdev4 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.149 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.149 [ 00:13:38.149 { 00:13:38.149 "name": "BaseBdev4", 00:13:38.149 "aliases": [ 00:13:38.149 "cb97df5c-f35c-4a74-add9-d3daccf1f9bf" 00:13:38.149 ], 00:13:38.149 "product_name": "Malloc disk", 00:13:38.149 "block_size": 512, 00:13:38.149 "num_blocks": 65536, 00:13:38.149 "uuid": "cb97df5c-f35c-4a74-add9-d3daccf1f9bf", 00:13:38.149 "assigned_rate_limits": { 00:13:38.149 "rw_ios_per_sec": 0, 00:13:38.149 "rw_mbytes_per_sec": 0, 00:13:38.149 "r_mbytes_per_sec": 0, 00:13:38.149 "w_mbytes_per_sec": 0 00:13:38.149 }, 00:13:38.149 "claimed": true, 00:13:38.149 "claim_type": "exclusive_write", 00:13:38.149 "zoned": false, 00:13:38.149 "supported_io_types": { 00:13:38.149 "read": true, 00:13:38.149 "write": true, 00:13:38.149 "unmap": true, 00:13:38.149 "flush": true, 00:13:38.149 "reset": true, 00:13:38.149 
"nvme_admin": false, 00:13:38.149 "nvme_io": false, 00:13:38.149 "nvme_io_md": false, 00:13:38.149 "write_zeroes": true, 00:13:38.149 "zcopy": true, 00:13:38.149 "get_zone_info": false, 00:13:38.149 "zone_management": false, 00:13:38.149 "zone_append": false, 00:13:38.149 "compare": false, 00:13:38.149 "compare_and_write": false, 00:13:38.149 "abort": true, 00:13:38.149 "seek_hole": false, 00:13:38.149 "seek_data": false, 00:13:38.149 "copy": true, 00:13:38.149 "nvme_iov_md": false 00:13:38.149 }, 00:13:38.149 "memory_domains": [ 00:13:38.149 { 00:13:38.149 "dma_device_id": "system", 00:13:38.149 "dma_device_type": 1 00:13:38.149 }, 00:13:38.149 { 00:13:38.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.149 "dma_device_type": 2 00:13:38.149 } 00:13:38.149 ], 00:13:38.150 "driver_specific": {} 00:13:38.150 } 00:13:38.150 ] 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.150 04:30:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.150 "name": "Existed_Raid", 00:13:38.150 "uuid": "a32acfa0-67d5-47f1-9835-6d5123ef2f9a", 00:13:38.150 "strip_size_kb": 0, 00:13:38.150 "state": "online", 00:13:38.150 "raid_level": "raid1", 00:13:38.150 "superblock": false, 00:13:38.150 "num_base_bdevs": 4, 00:13:38.150 "num_base_bdevs_discovered": 4, 00:13:38.150 "num_base_bdevs_operational": 4, 00:13:38.150 "base_bdevs_list": [ 00:13:38.150 { 00:13:38.150 "name": "BaseBdev1", 00:13:38.150 "uuid": "9c951c8d-6078-4876-a455-84b0240e5a82", 00:13:38.150 "is_configured": true, 00:13:38.150 "data_offset": 0, 00:13:38.150 "data_size": 65536 00:13:38.150 }, 00:13:38.150 { 00:13:38.150 "name": "BaseBdev2", 00:13:38.150 "uuid": "2d1f2b67-ace0-4af6-9de1-468afdc1bee5", 00:13:38.150 "is_configured": true, 00:13:38.150 "data_offset": 0, 00:13:38.150 "data_size": 65536 00:13:38.150 }, 00:13:38.150 { 00:13:38.150 "name": "BaseBdev3", 00:13:38.150 "uuid": 
"aaecd7ae-9bb6-4e24-9202-cbbe45e1532b", 00:13:38.150 "is_configured": true, 00:13:38.150 "data_offset": 0, 00:13:38.150 "data_size": 65536 00:13:38.150 }, 00:13:38.150 { 00:13:38.150 "name": "BaseBdev4", 00:13:38.150 "uuid": "cb97df5c-f35c-4a74-add9-d3daccf1f9bf", 00:13:38.150 "is_configured": true, 00:13:38.150 "data_offset": 0, 00:13:38.150 "data_size": 65536 00:13:38.150 } 00:13:38.150 ] 00:13:38.150 }' 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.150 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.408 04:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.408 [2024-11-27 04:30:34.988802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.667 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.667 04:30:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:38.667 "name": "Existed_Raid", 00:13:38.667 "aliases": [ 00:13:38.667 "a32acfa0-67d5-47f1-9835-6d5123ef2f9a" 00:13:38.667 ], 00:13:38.667 "product_name": "Raid Volume", 00:13:38.667 "block_size": 512, 00:13:38.667 "num_blocks": 65536, 00:13:38.667 "uuid": "a32acfa0-67d5-47f1-9835-6d5123ef2f9a", 00:13:38.667 "assigned_rate_limits": { 00:13:38.667 "rw_ios_per_sec": 0, 00:13:38.667 "rw_mbytes_per_sec": 0, 00:13:38.667 "r_mbytes_per_sec": 0, 00:13:38.667 "w_mbytes_per_sec": 0 00:13:38.667 }, 00:13:38.667 "claimed": false, 00:13:38.667 "zoned": false, 00:13:38.667 "supported_io_types": { 00:13:38.667 "read": true, 00:13:38.668 "write": true, 00:13:38.668 "unmap": false, 00:13:38.668 "flush": false, 00:13:38.668 "reset": true, 00:13:38.668 "nvme_admin": false, 00:13:38.668 "nvme_io": false, 00:13:38.668 "nvme_io_md": false, 00:13:38.668 "write_zeroes": true, 00:13:38.668 "zcopy": false, 00:13:38.668 "get_zone_info": false, 00:13:38.668 "zone_management": false, 00:13:38.668 "zone_append": false, 00:13:38.668 "compare": false, 00:13:38.668 "compare_and_write": false, 00:13:38.668 "abort": false, 00:13:38.668 "seek_hole": false, 00:13:38.668 "seek_data": false, 00:13:38.668 "copy": false, 00:13:38.668 "nvme_iov_md": false 00:13:38.668 }, 00:13:38.668 "memory_domains": [ 00:13:38.668 { 00:13:38.668 "dma_device_id": "system", 00:13:38.668 "dma_device_type": 1 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.668 "dma_device_type": 2 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "dma_device_id": "system", 00:13:38.668 "dma_device_type": 1 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.668 "dma_device_type": 2 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "dma_device_id": "system", 00:13:38.668 "dma_device_type": 1 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:38.668 "dma_device_type": 2 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "dma_device_id": "system", 00:13:38.668 "dma_device_type": 1 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.668 "dma_device_type": 2 00:13:38.668 } 00:13:38.668 ], 00:13:38.668 "driver_specific": { 00:13:38.668 "raid": { 00:13:38.668 "uuid": "a32acfa0-67d5-47f1-9835-6d5123ef2f9a", 00:13:38.668 "strip_size_kb": 0, 00:13:38.668 "state": "online", 00:13:38.668 "raid_level": "raid1", 00:13:38.668 "superblock": false, 00:13:38.668 "num_base_bdevs": 4, 00:13:38.668 "num_base_bdevs_discovered": 4, 00:13:38.668 "num_base_bdevs_operational": 4, 00:13:38.668 "base_bdevs_list": [ 00:13:38.668 { 00:13:38.668 "name": "BaseBdev1", 00:13:38.668 "uuid": "9c951c8d-6078-4876-a455-84b0240e5a82", 00:13:38.668 "is_configured": true, 00:13:38.668 "data_offset": 0, 00:13:38.668 "data_size": 65536 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "name": "BaseBdev2", 00:13:38.668 "uuid": "2d1f2b67-ace0-4af6-9de1-468afdc1bee5", 00:13:38.668 "is_configured": true, 00:13:38.668 "data_offset": 0, 00:13:38.668 "data_size": 65536 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "name": "BaseBdev3", 00:13:38.668 "uuid": "aaecd7ae-9bb6-4e24-9202-cbbe45e1532b", 00:13:38.668 "is_configured": true, 00:13:38.668 "data_offset": 0, 00:13:38.668 "data_size": 65536 00:13:38.668 }, 00:13:38.668 { 00:13:38.668 "name": "BaseBdev4", 00:13:38.668 "uuid": "cb97df5c-f35c-4a74-add9-d3daccf1f9bf", 00:13:38.668 "is_configured": true, 00:13:38.668 "data_offset": 0, 00:13:38.668 "data_size": 65536 00:13:38.668 } 00:13:38.668 ] 00:13:38.668 } 00:13:38.668 } 00:13:38.668 }' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:38.668 BaseBdev2 00:13:38.668 BaseBdev3 
00:13:38.668 BaseBdev4' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.668 04:30:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.668 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.928 04:30:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 [2024-11-27 04:30:35.280031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.928 
04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.928 "name": "Existed_Raid", 00:13:38.928 "uuid": "a32acfa0-67d5-47f1-9835-6d5123ef2f9a", 00:13:38.928 "strip_size_kb": 0, 00:13:38.928 "state": "online", 00:13:38.928 "raid_level": "raid1", 00:13:38.928 "superblock": false, 00:13:38.928 "num_base_bdevs": 4, 00:13:38.928 "num_base_bdevs_discovered": 3, 00:13:38.928 "num_base_bdevs_operational": 3, 00:13:38.928 "base_bdevs_list": [ 00:13:38.928 { 00:13:38.928 "name": null, 00:13:38.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.928 "is_configured": false, 00:13:38.928 "data_offset": 0, 00:13:38.928 "data_size": 65536 00:13:38.928 }, 00:13:38.928 { 00:13:38.928 "name": "BaseBdev2", 00:13:38.928 "uuid": "2d1f2b67-ace0-4af6-9de1-468afdc1bee5", 00:13:38.928 "is_configured": true, 00:13:38.928 "data_offset": 0, 00:13:38.928 "data_size": 65536 00:13:38.928 }, 00:13:38.928 { 00:13:38.928 "name": "BaseBdev3", 00:13:38.928 "uuid": "aaecd7ae-9bb6-4e24-9202-cbbe45e1532b", 00:13:38.928 "is_configured": true, 00:13:38.928 "data_offset": 0, 
00:13:38.928 "data_size": 65536 00:13:38.928 }, 00:13:38.928 { 00:13:38.928 "name": "BaseBdev4", 00:13:38.928 "uuid": "cb97df5c-f35c-4a74-add9-d3daccf1f9bf", 00:13:38.928 "is_configured": true, 00:13:38.928 "data_offset": 0, 00:13:38.928 "data_size": 65536 00:13:38.928 } 00:13:38.928 ] 00:13:38.928 }' 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.928 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:39.496 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.497 04:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.497 [2024-11-27 04:30:35.907487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.497 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.497 [2024-11-27 04:30:36.078945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.756 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.756 [2024-11-27 04:30:36.246111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:39.756 [2024-11-27 04:30:36.246328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.026 [2024-11-27 04:30:36.364133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.026 [2024-11-27 04:30:36.364218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.026 [2024-11-27 04:30:36.364234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.026 BaseBdev2 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.026 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.027 [ 00:13:40.027 { 00:13:40.027 "name": "BaseBdev2", 00:13:40.027 "aliases": [ 00:13:40.027 "2c93edbe-0a3e-4b60-b19a-0c8fb650df94" 00:13:40.027 ], 00:13:40.027 "product_name": "Malloc disk", 00:13:40.027 "block_size": 512, 00:13:40.027 "num_blocks": 65536, 00:13:40.027 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:40.027 "assigned_rate_limits": { 00:13:40.027 "rw_ios_per_sec": 0, 00:13:40.027 "rw_mbytes_per_sec": 0, 00:13:40.027 "r_mbytes_per_sec": 0, 00:13:40.027 "w_mbytes_per_sec": 0 00:13:40.027 }, 00:13:40.027 "claimed": false, 00:13:40.027 "zoned": false, 00:13:40.027 "supported_io_types": { 00:13:40.027 "read": true, 00:13:40.027 "write": true, 00:13:40.027 "unmap": true, 00:13:40.027 "flush": true, 00:13:40.027 "reset": true, 00:13:40.027 "nvme_admin": false, 00:13:40.027 "nvme_io": false, 00:13:40.027 "nvme_io_md": false, 00:13:40.027 "write_zeroes": true, 00:13:40.027 "zcopy": true, 00:13:40.027 "get_zone_info": false, 00:13:40.027 "zone_management": false, 00:13:40.027 "zone_append": false, 
00:13:40.027 "compare": false, 00:13:40.027 "compare_and_write": false, 00:13:40.027 "abort": true, 00:13:40.027 "seek_hole": false, 00:13:40.027 "seek_data": false, 00:13:40.027 "copy": true, 00:13:40.027 "nvme_iov_md": false 00:13:40.027 }, 00:13:40.027 "memory_domains": [ 00:13:40.027 { 00:13:40.027 "dma_device_id": "system", 00:13:40.027 "dma_device_type": 1 00:13:40.027 }, 00:13:40.027 { 00:13:40.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.027 "dma_device_type": 2 00:13:40.027 } 00:13:40.027 ], 00:13:40.027 "driver_specific": {} 00:13:40.027 } 00:13:40.027 ] 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.027 BaseBdev3 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.027 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.027 [ 00:13:40.027 { 00:13:40.027 "name": "BaseBdev3", 00:13:40.027 "aliases": [ 00:13:40.027 "8a7d8a7c-f26b-4cef-905e-b4dbb7165464" 00:13:40.027 ], 00:13:40.027 "product_name": "Malloc disk", 00:13:40.027 "block_size": 512, 00:13:40.027 "num_blocks": 65536, 00:13:40.027 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:40.027 "assigned_rate_limits": { 00:13:40.027 "rw_ios_per_sec": 0, 00:13:40.027 "rw_mbytes_per_sec": 0, 00:13:40.027 "r_mbytes_per_sec": 0, 00:13:40.027 "w_mbytes_per_sec": 0 00:13:40.027 }, 00:13:40.027 "claimed": false, 00:13:40.027 "zoned": false, 00:13:40.027 "supported_io_types": { 00:13:40.027 "read": true, 00:13:40.027 "write": true, 00:13:40.027 "unmap": true, 00:13:40.027 "flush": true, 00:13:40.027 "reset": true, 00:13:40.027 "nvme_admin": false, 00:13:40.027 "nvme_io": false, 00:13:40.027 "nvme_io_md": false, 00:13:40.027 "write_zeroes": true, 00:13:40.027 "zcopy": true, 00:13:40.027 "get_zone_info": false, 00:13:40.027 "zone_management": false, 00:13:40.027 "zone_append": false, 
00:13:40.027 "compare": false, 00:13:40.027 "compare_and_write": false, 00:13:40.027 "abort": true, 00:13:40.027 "seek_hole": false, 00:13:40.027 "seek_data": false, 00:13:40.027 "copy": true, 00:13:40.027 "nvme_iov_md": false 00:13:40.027 }, 00:13:40.027 "memory_domains": [ 00:13:40.027 { 00:13:40.027 "dma_device_id": "system", 00:13:40.027 "dma_device_type": 1 00:13:40.027 }, 00:13:40.027 { 00:13:40.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.027 "dma_device_type": 2 00:13:40.027 } 00:13:40.027 ], 00:13:40.027 "driver_specific": {} 00:13:40.027 } 00:13:40.027 ] 00:13:40.028 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.028 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:40.028 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:40.028 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:40.028 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:40.028 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.028 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.287 BaseBdev4 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.287 [ 00:13:40.287 { 00:13:40.287 "name": "BaseBdev4", 00:13:40.287 "aliases": [ 00:13:40.287 "90d850ff-0942-46a2-843c-4912c33352be" 00:13:40.287 ], 00:13:40.287 "product_name": "Malloc disk", 00:13:40.287 "block_size": 512, 00:13:40.287 "num_blocks": 65536, 00:13:40.287 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:40.287 "assigned_rate_limits": { 00:13:40.287 "rw_ios_per_sec": 0, 00:13:40.287 "rw_mbytes_per_sec": 0, 00:13:40.287 "r_mbytes_per_sec": 0, 00:13:40.287 "w_mbytes_per_sec": 0 00:13:40.287 }, 00:13:40.287 "claimed": false, 00:13:40.287 "zoned": false, 00:13:40.287 "supported_io_types": { 00:13:40.287 "read": true, 00:13:40.287 "write": true, 00:13:40.287 "unmap": true, 00:13:40.287 "flush": true, 00:13:40.287 "reset": true, 00:13:40.287 "nvme_admin": false, 00:13:40.287 "nvme_io": false, 00:13:40.287 "nvme_io_md": false, 00:13:40.287 "write_zeroes": true, 00:13:40.287 "zcopy": true, 00:13:40.287 "get_zone_info": false, 00:13:40.287 "zone_management": false, 00:13:40.287 "zone_append": false, 
00:13:40.287 "compare": false, 00:13:40.287 "compare_and_write": false, 00:13:40.287 "abort": true, 00:13:40.287 "seek_hole": false, 00:13:40.287 "seek_data": false, 00:13:40.287 "copy": true, 00:13:40.287 "nvme_iov_md": false 00:13:40.287 }, 00:13:40.287 "memory_domains": [ 00:13:40.287 { 00:13:40.287 "dma_device_id": "system", 00:13:40.287 "dma_device_type": 1 00:13:40.287 }, 00:13:40.287 { 00:13:40.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.287 "dma_device_type": 2 00:13:40.287 } 00:13:40.287 ], 00:13:40.287 "driver_specific": {} 00:13:40.287 } 00:13:40.287 ] 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.287 [2024-11-27 04:30:36.697238] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.287 [2024-11-27 04:30:36.697313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.287 [2024-11-27 04:30:36.697343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.287 [2024-11-27 04:30:36.699803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.287 [2024-11-27 04:30:36.699867] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:40.287 "name": "Existed_Raid", 00:13:40.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.287 "strip_size_kb": 0, 00:13:40.287 "state": "configuring", 00:13:40.287 "raid_level": "raid1", 00:13:40.287 "superblock": false, 00:13:40.287 "num_base_bdevs": 4, 00:13:40.287 "num_base_bdevs_discovered": 3, 00:13:40.287 "num_base_bdevs_operational": 4, 00:13:40.287 "base_bdevs_list": [ 00:13:40.287 { 00:13:40.287 "name": "BaseBdev1", 00:13:40.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.287 "is_configured": false, 00:13:40.287 "data_offset": 0, 00:13:40.287 "data_size": 0 00:13:40.287 }, 00:13:40.287 { 00:13:40.287 "name": "BaseBdev2", 00:13:40.287 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:40.287 "is_configured": true, 00:13:40.287 "data_offset": 0, 00:13:40.287 "data_size": 65536 00:13:40.287 }, 00:13:40.287 { 00:13:40.287 "name": "BaseBdev3", 00:13:40.287 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:40.287 "is_configured": true, 00:13:40.287 "data_offset": 0, 00:13:40.287 "data_size": 65536 00:13:40.287 }, 00:13:40.287 { 00:13:40.287 "name": "BaseBdev4", 00:13:40.287 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:40.287 "is_configured": true, 00:13:40.287 "data_offset": 0, 00:13:40.287 "data_size": 65536 00:13:40.287 } 00:13:40.287 ] 00:13:40.287 }' 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.287 04:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.855 [2024-11-27 04:30:37.140515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.855 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.855 "name": "Existed_Raid", 00:13:40.855 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:40.855 "strip_size_kb": 0, 00:13:40.855 "state": "configuring", 00:13:40.855 "raid_level": "raid1", 00:13:40.855 "superblock": false, 00:13:40.855 "num_base_bdevs": 4, 00:13:40.855 "num_base_bdevs_discovered": 2, 00:13:40.855 "num_base_bdevs_operational": 4, 00:13:40.855 "base_bdevs_list": [ 00:13:40.855 { 00:13:40.855 "name": "BaseBdev1", 00:13:40.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.856 "is_configured": false, 00:13:40.856 "data_offset": 0, 00:13:40.856 "data_size": 0 00:13:40.856 }, 00:13:40.856 { 00:13:40.856 "name": null, 00:13:40.856 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:40.856 "is_configured": false, 00:13:40.856 "data_offset": 0, 00:13:40.856 "data_size": 65536 00:13:40.856 }, 00:13:40.856 { 00:13:40.856 "name": "BaseBdev3", 00:13:40.856 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:40.856 "is_configured": true, 00:13:40.856 "data_offset": 0, 00:13:40.856 "data_size": 65536 00:13:40.856 }, 00:13:40.856 { 00:13:40.856 "name": "BaseBdev4", 00:13:40.856 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:40.856 "is_configured": true, 00:13:40.856 "data_offset": 0, 00:13:40.856 "data_size": 65536 00:13:40.856 } 00:13:40.856 ] 00:13:40.856 }' 00:13:40.856 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.856 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.114 [2024-11-27 04:30:37.691594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.114 BaseBdev1 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.114 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.373 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.373 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:13:41.373 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.373 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.373 [ 00:13:41.373 { 00:13:41.373 "name": "BaseBdev1", 00:13:41.373 "aliases": [ 00:13:41.374 "0e557271-0cc1-46d3-80c0-5d9e5dfc415b" 00:13:41.374 ], 00:13:41.374 "product_name": "Malloc disk", 00:13:41.374 "block_size": 512, 00:13:41.374 "num_blocks": 65536, 00:13:41.374 "uuid": "0e557271-0cc1-46d3-80c0-5d9e5dfc415b", 00:13:41.374 "assigned_rate_limits": { 00:13:41.374 "rw_ios_per_sec": 0, 00:13:41.374 "rw_mbytes_per_sec": 0, 00:13:41.374 "r_mbytes_per_sec": 0, 00:13:41.374 "w_mbytes_per_sec": 0 00:13:41.374 }, 00:13:41.374 "claimed": true, 00:13:41.374 "claim_type": "exclusive_write", 00:13:41.374 "zoned": false, 00:13:41.374 "supported_io_types": { 00:13:41.374 "read": true, 00:13:41.374 "write": true, 00:13:41.374 "unmap": true, 00:13:41.374 "flush": true, 00:13:41.374 "reset": true, 00:13:41.374 "nvme_admin": false, 00:13:41.374 "nvme_io": false, 00:13:41.374 "nvme_io_md": false, 00:13:41.374 "write_zeroes": true, 00:13:41.374 "zcopy": true, 00:13:41.374 "get_zone_info": false, 00:13:41.374 "zone_management": false, 00:13:41.374 "zone_append": false, 00:13:41.374 "compare": false, 00:13:41.374 "compare_and_write": false, 00:13:41.374 "abort": true, 00:13:41.374 "seek_hole": false, 00:13:41.374 "seek_data": false, 00:13:41.374 "copy": true, 00:13:41.374 "nvme_iov_md": false 00:13:41.374 }, 00:13:41.374 "memory_domains": [ 00:13:41.374 { 00:13:41.374 "dma_device_id": "system", 00:13:41.374 "dma_device_type": 1 00:13:41.374 }, 00:13:41.374 { 00:13:41.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.374 "dma_device_type": 2 00:13:41.374 } 00:13:41.374 ], 00:13:41.374 "driver_specific": {} 00:13:41.374 } 00:13:41.374 ] 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.374 "name": "Existed_Raid", 00:13:41.374 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:41.374 "strip_size_kb": 0, 00:13:41.374 "state": "configuring", 00:13:41.374 "raid_level": "raid1", 00:13:41.374 "superblock": false, 00:13:41.374 "num_base_bdevs": 4, 00:13:41.374 "num_base_bdevs_discovered": 3, 00:13:41.374 "num_base_bdevs_operational": 4, 00:13:41.374 "base_bdevs_list": [ 00:13:41.374 { 00:13:41.374 "name": "BaseBdev1", 00:13:41.374 "uuid": "0e557271-0cc1-46d3-80c0-5d9e5dfc415b", 00:13:41.374 "is_configured": true, 00:13:41.374 "data_offset": 0, 00:13:41.374 "data_size": 65536 00:13:41.374 }, 00:13:41.374 { 00:13:41.374 "name": null, 00:13:41.374 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:41.374 "is_configured": false, 00:13:41.374 "data_offset": 0, 00:13:41.374 "data_size": 65536 00:13:41.374 }, 00:13:41.374 { 00:13:41.374 "name": "BaseBdev3", 00:13:41.374 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:41.374 "is_configured": true, 00:13:41.374 "data_offset": 0, 00:13:41.374 "data_size": 65536 00:13:41.374 }, 00:13:41.374 { 00:13:41.374 "name": "BaseBdev4", 00:13:41.374 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:41.374 "is_configured": true, 00:13:41.374 "data_offset": 0, 00:13:41.374 "data_size": 65536 00:13:41.374 } 00:13:41.374 ] 00:13:41.374 }' 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.374 04:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.633 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.633 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:41.633 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.633 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.633 04:30:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.945 [2024-11-27 04:30:38.246886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.945 "name": "Existed_Raid", 00:13:41.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.945 "strip_size_kb": 0, 00:13:41.945 "state": "configuring", 00:13:41.945 "raid_level": "raid1", 00:13:41.945 "superblock": false, 00:13:41.945 "num_base_bdevs": 4, 00:13:41.945 "num_base_bdevs_discovered": 2, 00:13:41.945 "num_base_bdevs_operational": 4, 00:13:41.945 "base_bdevs_list": [ 00:13:41.945 { 00:13:41.945 "name": "BaseBdev1", 00:13:41.945 "uuid": "0e557271-0cc1-46d3-80c0-5d9e5dfc415b", 00:13:41.945 "is_configured": true, 00:13:41.945 "data_offset": 0, 00:13:41.945 "data_size": 65536 00:13:41.945 }, 00:13:41.945 { 00:13:41.945 "name": null, 00:13:41.945 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:41.945 "is_configured": false, 00:13:41.945 "data_offset": 0, 00:13:41.945 "data_size": 65536 00:13:41.945 }, 00:13:41.945 { 00:13:41.945 "name": null, 00:13:41.945 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:41.945 "is_configured": false, 00:13:41.945 "data_offset": 0, 00:13:41.945 "data_size": 65536 00:13:41.945 }, 00:13:41.945 { 00:13:41.945 "name": "BaseBdev4", 00:13:41.945 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:41.945 "is_configured": true, 00:13:41.945 "data_offset": 0, 00:13:41.945 "data_size": 65536 00:13:41.945 } 00:13:41.945 ] 00:13:41.945 }' 00:13:41.945 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.945 04:30:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.206 [2024-11-27 04:30:38.734068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.206 04:30:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.206 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.206 "name": "Existed_Raid", 00:13:42.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.206 "strip_size_kb": 0, 00:13:42.206 "state": "configuring", 00:13:42.206 "raid_level": "raid1", 00:13:42.206 "superblock": false, 00:13:42.206 "num_base_bdevs": 4, 00:13:42.206 "num_base_bdevs_discovered": 3, 00:13:42.206 "num_base_bdevs_operational": 4, 00:13:42.206 "base_bdevs_list": [ 00:13:42.206 { 00:13:42.206 "name": "BaseBdev1", 00:13:42.206 "uuid": "0e557271-0cc1-46d3-80c0-5d9e5dfc415b", 00:13:42.206 "is_configured": true, 00:13:42.206 "data_offset": 0, 00:13:42.206 "data_size": 65536 00:13:42.206 }, 00:13:42.206 { 00:13:42.206 "name": null, 00:13:42.206 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:42.206 "is_configured": false, 00:13:42.206 "data_offset": 
0, 00:13:42.206 "data_size": 65536 00:13:42.206 }, 00:13:42.206 { 00:13:42.206 "name": "BaseBdev3", 00:13:42.206 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:42.206 "is_configured": true, 00:13:42.206 "data_offset": 0, 00:13:42.206 "data_size": 65536 00:13:42.206 }, 00:13:42.206 { 00:13:42.206 "name": "BaseBdev4", 00:13:42.206 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:42.206 "is_configured": true, 00:13:42.206 "data_offset": 0, 00:13:42.206 "data_size": 65536 00:13:42.206 } 00:13:42.206 ] 00:13:42.206 }' 00:13:42.207 04:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.207 04:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.795 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.795 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:42.795 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.795 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.795 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.795 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:42.795 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:42.795 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.795 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.795 [2024-11-27 04:30:39.281179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.064 04:30:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.064 "name": "Existed_Raid", 00:13:43.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.064 "strip_size_kb": 0, 00:13:43.064 "state": "configuring", 00:13:43.064 
"raid_level": "raid1", 00:13:43.064 "superblock": false, 00:13:43.064 "num_base_bdevs": 4, 00:13:43.064 "num_base_bdevs_discovered": 2, 00:13:43.064 "num_base_bdevs_operational": 4, 00:13:43.064 "base_bdevs_list": [ 00:13:43.064 { 00:13:43.064 "name": null, 00:13:43.064 "uuid": "0e557271-0cc1-46d3-80c0-5d9e5dfc415b", 00:13:43.064 "is_configured": false, 00:13:43.064 "data_offset": 0, 00:13:43.064 "data_size": 65536 00:13:43.064 }, 00:13:43.064 { 00:13:43.064 "name": null, 00:13:43.064 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:43.064 "is_configured": false, 00:13:43.064 "data_offset": 0, 00:13:43.064 "data_size": 65536 00:13:43.064 }, 00:13:43.064 { 00:13:43.064 "name": "BaseBdev3", 00:13:43.064 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:43.064 "is_configured": true, 00:13:43.064 "data_offset": 0, 00:13:43.064 "data_size": 65536 00:13:43.064 }, 00:13:43.064 { 00:13:43.064 "name": "BaseBdev4", 00:13:43.064 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:43.064 "is_configured": true, 00:13:43.064 "data_offset": 0, 00:13:43.064 "data_size": 65536 00:13:43.064 } 00:13:43.064 ] 00:13:43.064 }' 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.064 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.323 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.323 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.323 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.323 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:43.323 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.583 [2024-11-27 04:30:39.928340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.583 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.583 "name": "Existed_Raid", 00:13:43.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.583 "strip_size_kb": 0, 00:13:43.583 "state": "configuring", 00:13:43.583 "raid_level": "raid1", 00:13:43.583 "superblock": false, 00:13:43.583 "num_base_bdevs": 4, 00:13:43.583 "num_base_bdevs_discovered": 3, 00:13:43.583 "num_base_bdevs_operational": 4, 00:13:43.583 "base_bdevs_list": [ 00:13:43.583 { 00:13:43.583 "name": null, 00:13:43.583 "uuid": "0e557271-0cc1-46d3-80c0-5d9e5dfc415b", 00:13:43.583 "is_configured": false, 00:13:43.583 "data_offset": 0, 00:13:43.583 "data_size": 65536 00:13:43.583 }, 00:13:43.583 { 00:13:43.583 "name": "BaseBdev2", 00:13:43.583 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:43.583 "is_configured": true, 00:13:43.583 "data_offset": 0, 00:13:43.583 "data_size": 65536 00:13:43.583 }, 00:13:43.583 { 00:13:43.583 "name": "BaseBdev3", 00:13:43.583 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:43.583 "is_configured": true, 00:13:43.583 "data_offset": 0, 00:13:43.583 "data_size": 65536 00:13:43.583 }, 00:13:43.583 { 00:13:43.583 "name": "BaseBdev4", 00:13:43.583 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:43.583 "is_configured": true, 00:13:43.583 "data_offset": 0, 00:13:43.583 "data_size": 65536 00:13:43.584 } 00:13:43.584 ] 00:13:43.584 }' 00:13:43.584 04:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.584 04:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.843 04:30:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.843 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:43.843 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.843 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0e557271-0cc1-46d3-80c0-5d9e5dfc415b 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.103 [2024-11-27 04:30:40.534251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:44.103 [2024-11-27 04:30:40.534428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:44.103 [2024-11-27 04:30:40.534460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:44.103 
[2024-11-27 04:30:40.534834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:44.103 [2024-11-27 04:30:40.535073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:44.103 [2024-11-27 04:30:40.535130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:44.103 [2024-11-27 04:30:40.535528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.103 NewBaseBdev 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.103 [ 00:13:44.103 { 00:13:44.103 "name": "NewBaseBdev", 00:13:44.103 "aliases": [ 00:13:44.103 "0e557271-0cc1-46d3-80c0-5d9e5dfc415b" 00:13:44.103 ], 00:13:44.103 "product_name": "Malloc disk", 00:13:44.103 "block_size": 512, 00:13:44.103 "num_blocks": 65536, 00:13:44.103 "uuid": "0e557271-0cc1-46d3-80c0-5d9e5dfc415b", 00:13:44.103 "assigned_rate_limits": { 00:13:44.103 "rw_ios_per_sec": 0, 00:13:44.103 "rw_mbytes_per_sec": 0, 00:13:44.103 "r_mbytes_per_sec": 0, 00:13:44.103 "w_mbytes_per_sec": 0 00:13:44.103 }, 00:13:44.103 "claimed": true, 00:13:44.103 "claim_type": "exclusive_write", 00:13:44.103 "zoned": false, 00:13:44.103 "supported_io_types": { 00:13:44.103 "read": true, 00:13:44.103 "write": true, 00:13:44.103 "unmap": true, 00:13:44.103 "flush": true, 00:13:44.103 "reset": true, 00:13:44.103 "nvme_admin": false, 00:13:44.103 "nvme_io": false, 00:13:44.103 "nvme_io_md": false, 00:13:44.103 "write_zeroes": true, 00:13:44.103 "zcopy": true, 00:13:44.103 "get_zone_info": false, 00:13:44.103 "zone_management": false, 00:13:44.103 "zone_append": false, 00:13:44.103 "compare": false, 00:13:44.103 "compare_and_write": false, 00:13:44.103 "abort": true, 00:13:44.103 "seek_hole": false, 00:13:44.103 "seek_data": false, 00:13:44.103 "copy": true, 00:13:44.103 "nvme_iov_md": false 00:13:44.103 }, 00:13:44.103 "memory_domains": [ 00:13:44.103 { 00:13:44.103 "dma_device_id": "system", 00:13:44.103 "dma_device_type": 1 00:13:44.103 }, 00:13:44.103 { 00:13:44.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.103 "dma_device_type": 2 00:13:44.103 } 00:13:44.103 ], 00:13:44.103 "driver_specific": {} 00:13:44.103 } 00:13:44.103 ] 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.103 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.103 "name": "Existed_Raid", 00:13:44.103 "uuid": "a8b308df-985e-4472-8847-415bc3b91978", 00:13:44.103 "strip_size_kb": 0, 00:13:44.103 "state": "online", 00:13:44.103 
"raid_level": "raid1", 00:13:44.103 "superblock": false, 00:13:44.103 "num_base_bdevs": 4, 00:13:44.103 "num_base_bdevs_discovered": 4, 00:13:44.103 "num_base_bdevs_operational": 4, 00:13:44.103 "base_bdevs_list": [ 00:13:44.103 { 00:13:44.103 "name": "NewBaseBdev", 00:13:44.103 "uuid": "0e557271-0cc1-46d3-80c0-5d9e5dfc415b", 00:13:44.103 "is_configured": true, 00:13:44.103 "data_offset": 0, 00:13:44.103 "data_size": 65536 00:13:44.103 }, 00:13:44.103 { 00:13:44.103 "name": "BaseBdev2", 00:13:44.103 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:44.103 "is_configured": true, 00:13:44.103 "data_offset": 0, 00:13:44.103 "data_size": 65536 00:13:44.103 }, 00:13:44.103 { 00:13:44.103 "name": "BaseBdev3", 00:13:44.103 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:44.103 "is_configured": true, 00:13:44.103 "data_offset": 0, 00:13:44.103 "data_size": 65536 00:13:44.103 }, 00:13:44.103 { 00:13:44.104 "name": "BaseBdev4", 00:13:44.104 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:44.104 "is_configured": true, 00:13:44.104 "data_offset": 0, 00:13:44.104 "data_size": 65536 00:13:44.104 } 00:13:44.104 ] 00:13:44.104 }' 00:13:44.104 04:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.104 04:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.673 [2024-11-27 04:30:41.029915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.673 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:44.673 "name": "Existed_Raid", 00:13:44.673 "aliases": [ 00:13:44.673 "a8b308df-985e-4472-8847-415bc3b91978" 00:13:44.673 ], 00:13:44.673 "product_name": "Raid Volume", 00:13:44.673 "block_size": 512, 00:13:44.673 "num_blocks": 65536, 00:13:44.673 "uuid": "a8b308df-985e-4472-8847-415bc3b91978", 00:13:44.673 "assigned_rate_limits": { 00:13:44.673 "rw_ios_per_sec": 0, 00:13:44.673 "rw_mbytes_per_sec": 0, 00:13:44.673 "r_mbytes_per_sec": 0, 00:13:44.673 "w_mbytes_per_sec": 0 00:13:44.673 }, 00:13:44.673 "claimed": false, 00:13:44.673 "zoned": false, 00:13:44.673 "supported_io_types": { 00:13:44.673 "read": true, 00:13:44.673 "write": true, 00:13:44.673 "unmap": false, 00:13:44.673 "flush": false, 00:13:44.673 "reset": true, 00:13:44.673 "nvme_admin": false, 00:13:44.674 "nvme_io": false, 00:13:44.674 "nvme_io_md": false, 00:13:44.674 "write_zeroes": true, 00:13:44.674 "zcopy": false, 00:13:44.674 "get_zone_info": false, 00:13:44.674 "zone_management": false, 00:13:44.674 "zone_append": false, 00:13:44.674 "compare": false, 00:13:44.674 "compare_and_write": false, 00:13:44.674 "abort": false, 00:13:44.674 "seek_hole": false, 00:13:44.674 "seek_data": false, 00:13:44.674 
"copy": false, 00:13:44.674 "nvme_iov_md": false 00:13:44.674 }, 00:13:44.674 "memory_domains": [ 00:13:44.674 { 00:13:44.674 "dma_device_id": "system", 00:13:44.674 "dma_device_type": 1 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.674 "dma_device_type": 2 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "dma_device_id": "system", 00:13:44.674 "dma_device_type": 1 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.674 "dma_device_type": 2 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "dma_device_id": "system", 00:13:44.674 "dma_device_type": 1 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.674 "dma_device_type": 2 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "dma_device_id": "system", 00:13:44.674 "dma_device_type": 1 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.674 "dma_device_type": 2 00:13:44.674 } 00:13:44.674 ], 00:13:44.674 "driver_specific": { 00:13:44.674 "raid": { 00:13:44.674 "uuid": "a8b308df-985e-4472-8847-415bc3b91978", 00:13:44.674 "strip_size_kb": 0, 00:13:44.674 "state": "online", 00:13:44.674 "raid_level": "raid1", 00:13:44.674 "superblock": false, 00:13:44.674 "num_base_bdevs": 4, 00:13:44.674 "num_base_bdevs_discovered": 4, 00:13:44.674 "num_base_bdevs_operational": 4, 00:13:44.674 "base_bdevs_list": [ 00:13:44.674 { 00:13:44.674 "name": "NewBaseBdev", 00:13:44.674 "uuid": "0e557271-0cc1-46d3-80c0-5d9e5dfc415b", 00:13:44.674 "is_configured": true, 00:13:44.674 "data_offset": 0, 00:13:44.674 "data_size": 65536 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "name": "BaseBdev2", 00:13:44.674 "uuid": "2c93edbe-0a3e-4b60-b19a-0c8fb650df94", 00:13:44.674 "is_configured": true, 00:13:44.674 "data_offset": 0, 00:13:44.674 "data_size": 65536 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "name": "BaseBdev3", 00:13:44.674 "uuid": "8a7d8a7c-f26b-4cef-905e-b4dbb7165464", 00:13:44.674 
"is_configured": true, 00:13:44.674 "data_offset": 0, 00:13:44.674 "data_size": 65536 00:13:44.674 }, 00:13:44.674 { 00:13:44.674 "name": "BaseBdev4", 00:13:44.674 "uuid": "90d850ff-0942-46a2-843c-4912c33352be", 00:13:44.674 "is_configured": true, 00:13:44.674 "data_offset": 0, 00:13:44.674 "data_size": 65536 00:13:44.674 } 00:13:44.674 ] 00:13:44.674 } 00:13:44.674 } 00:13:44.674 }' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:44.674 BaseBdev2 00:13:44.674 BaseBdev3 00:13:44.674 BaseBdev4' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.674 04:30:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.674 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.935 04:30:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.935 [2024-11-27 04:30:41.337050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.935 [2024-11-27 04:30:41.337165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.935 [2024-11-27 04:30:41.337316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.935 [2024-11-27 04:30:41.337733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.935 [2024-11-27 04:30:41.337799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73474 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73474 ']' 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73474 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73474 00:13:44.935 killing process with pid 73474 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73474' 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73474 00:13:44.935 [2024-11-27 04:30:41.380999] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.935 04:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73474 00:13:45.504 [2024-11-27 04:30:41.850590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:46.884 00:13:46.884 real 0m12.240s 00:13:46.884 user 0m19.109s 00:13:46.884 sys 0m2.247s 00:13:46.884 ************************************ 00:13:46.884 END TEST raid_state_function_test 00:13:46.884 ************************************ 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:46.884 04:30:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:46.884 04:30:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:46.884 04:30:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.884 04:30:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.884 ************************************ 00:13:46.884 START TEST raid_state_function_test_sb 00:13:46.884 ************************************ 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:46.884 
04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74151 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74151' 00:13:46.884 Process raid pid: 74151 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74151 00:13:46.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74151 ']' 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.884 04:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.884 [2024-11-27 04:30:43.324183] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:13:46.884 [2024-11-27 04:30:43.324316] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.144 [2024-11-27 04:30:43.503736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.144 [2024-11-27 04:30:43.653518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.403 [2024-11-27 04:30:43.899982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.403 [2024-11-27 04:30:43.900050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.663 [2024-11-27 04:30:44.192550] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.663 [2024-11-27 04:30:44.192624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.663 [2024-11-27 04:30:44.192638] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:47.663 [2024-11-27 04:30:44.192650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.663 [2024-11-27 04:30:44.192657] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:47.663 [2024-11-27 04:30:44.192668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:47.663 [2024-11-27 04:30:44.192675] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:47.663 [2024-11-27 04:30:44.192686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.663 04:30:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.663 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.923 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.923 "name": "Existed_Raid", 00:13:47.923 "uuid": "871eb021-2022-47ff-9cd8-24ff318fb6af", 00:13:47.923 "strip_size_kb": 0, 00:13:47.923 "state": "configuring", 00:13:47.923 "raid_level": "raid1", 00:13:47.923 "superblock": true, 00:13:47.923 "num_base_bdevs": 4, 00:13:47.923 "num_base_bdevs_discovered": 0, 00:13:47.923 "num_base_bdevs_operational": 4, 00:13:47.923 "base_bdevs_list": [ 00:13:47.923 { 00:13:47.923 "name": "BaseBdev1", 00:13:47.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.923 "is_configured": false, 00:13:47.923 "data_offset": 0, 00:13:47.923 "data_size": 0 00:13:47.923 }, 00:13:47.923 { 00:13:47.923 "name": "BaseBdev2", 00:13:47.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.923 "is_configured": false, 00:13:47.923 "data_offset": 0, 00:13:47.923 "data_size": 0 00:13:47.923 }, 00:13:47.923 { 00:13:47.923 "name": "BaseBdev3", 00:13:47.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.923 "is_configured": false, 00:13:47.923 "data_offset": 0, 00:13:47.923 "data_size": 0 00:13:47.923 }, 00:13:47.923 { 00:13:47.923 "name": "BaseBdev4", 00:13:47.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.923 "is_configured": false, 00:13:47.923 "data_offset": 0, 00:13:47.923 "data_size": 0 00:13:47.923 } 00:13:47.923 ] 00:13:47.923 }' 00:13:47.923 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.923 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.183 04:30:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.183 [2024-11-27 04:30:44.623891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.183 [2024-11-27 04:30:44.624018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.183 [2024-11-27 04:30:44.635820] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.183 [2024-11-27 04:30:44.635909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.183 [2024-11-27 04:30:44.635941] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.183 [2024-11-27 04:30:44.635969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.183 [2024-11-27 04:30:44.635990] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:48.183 [2024-11-27 04:30:44.636015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.183 [2024-11-27 04:30:44.636035] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:13:48.183 [2024-11-27 04:30:44.636060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.183 [2024-11-27 04:30:44.691770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.183 BaseBdev1 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.183 [ 00:13:48.183 { 00:13:48.183 "name": "BaseBdev1", 00:13:48.183 "aliases": [ 00:13:48.183 "9b83d9f5-1224-481d-872e-606f85781684" 00:13:48.183 ], 00:13:48.183 "product_name": "Malloc disk", 00:13:48.183 "block_size": 512, 00:13:48.183 "num_blocks": 65536, 00:13:48.183 "uuid": "9b83d9f5-1224-481d-872e-606f85781684", 00:13:48.183 "assigned_rate_limits": { 00:13:48.183 "rw_ios_per_sec": 0, 00:13:48.183 "rw_mbytes_per_sec": 0, 00:13:48.183 "r_mbytes_per_sec": 0, 00:13:48.183 "w_mbytes_per_sec": 0 00:13:48.183 }, 00:13:48.183 "claimed": true, 00:13:48.183 "claim_type": "exclusive_write", 00:13:48.183 "zoned": false, 00:13:48.183 "supported_io_types": { 00:13:48.183 "read": true, 00:13:48.183 "write": true, 00:13:48.183 "unmap": true, 00:13:48.183 "flush": true, 00:13:48.183 "reset": true, 00:13:48.183 "nvme_admin": false, 00:13:48.183 "nvme_io": false, 00:13:48.183 "nvme_io_md": false, 00:13:48.183 "write_zeroes": true, 00:13:48.183 "zcopy": true, 00:13:48.183 "get_zone_info": false, 00:13:48.183 "zone_management": false, 00:13:48.183 "zone_append": false, 00:13:48.183 "compare": false, 00:13:48.183 "compare_and_write": false, 00:13:48.183 "abort": true, 00:13:48.183 "seek_hole": false, 00:13:48.183 "seek_data": false, 00:13:48.183 "copy": true, 00:13:48.183 "nvme_iov_md": false 00:13:48.183 }, 00:13:48.183 "memory_domains": [ 00:13:48.183 { 00:13:48.183 "dma_device_id": "system", 00:13:48.183 "dma_device_type": 1 00:13:48.183 }, 00:13:48.183 { 00:13:48.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.183 "dma_device_type": 2 00:13:48.183 } 00:13:48.183 
], 00:13:48.183 "driver_specific": {} 00:13:48.183 } 00:13:48.183 ] 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.183 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.184 04:30:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.442 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.442 "name": "Existed_Raid", 00:13:48.442 "uuid": "34e570a0-8966-4866-8971-78cc65c4411d", 00:13:48.442 "strip_size_kb": 0, 00:13:48.442 "state": "configuring", 00:13:48.442 "raid_level": "raid1", 00:13:48.442 "superblock": true, 00:13:48.442 "num_base_bdevs": 4, 00:13:48.442 "num_base_bdevs_discovered": 1, 00:13:48.442 "num_base_bdevs_operational": 4, 00:13:48.442 "base_bdevs_list": [ 00:13:48.442 { 00:13:48.442 "name": "BaseBdev1", 00:13:48.442 "uuid": "9b83d9f5-1224-481d-872e-606f85781684", 00:13:48.442 "is_configured": true, 00:13:48.442 "data_offset": 2048, 00:13:48.442 "data_size": 63488 00:13:48.442 }, 00:13:48.442 { 00:13:48.442 "name": "BaseBdev2", 00:13:48.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.442 "is_configured": false, 00:13:48.442 "data_offset": 0, 00:13:48.442 "data_size": 0 00:13:48.442 }, 00:13:48.442 { 00:13:48.442 "name": "BaseBdev3", 00:13:48.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.442 "is_configured": false, 00:13:48.442 "data_offset": 0, 00:13:48.442 "data_size": 0 00:13:48.442 }, 00:13:48.442 { 00:13:48.442 "name": "BaseBdev4", 00:13:48.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.442 "is_configured": false, 00:13:48.442 "data_offset": 0, 00:13:48.442 "data_size": 0 00:13:48.442 } 00:13:48.442 ] 00:13:48.442 }' 00:13:48.442 04:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.442 04:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.702 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.702 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.702 04:30:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.702 [2024-11-27 04:30:45.199230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.702 [2024-11-27 04:30:45.199298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:48.702 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.702 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:48.702 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.702 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.702 [2024-11-27 04:30:45.211341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.702 [2024-11-27 04:30:45.213835] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.702 [2024-11-27 04:30:45.213944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.702 [2024-11-27 04:30:45.213977] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:48.702 [2024-11-27 04:30:45.214001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.702 [2024-11-27 04:30:45.214114] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:48.703 [2024-11-27 04:30:45.214149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:13:48.703 "name": "Existed_Raid", 00:13:48.703 "uuid": "a139aa9d-2f6e-40ed-83db-5c6e335ab80a", 00:13:48.703 "strip_size_kb": 0, 00:13:48.703 "state": "configuring", 00:13:48.703 "raid_level": "raid1", 00:13:48.703 "superblock": true, 00:13:48.703 "num_base_bdevs": 4, 00:13:48.703 "num_base_bdevs_discovered": 1, 00:13:48.703 "num_base_bdevs_operational": 4, 00:13:48.703 "base_bdevs_list": [ 00:13:48.703 { 00:13:48.703 "name": "BaseBdev1", 00:13:48.703 "uuid": "9b83d9f5-1224-481d-872e-606f85781684", 00:13:48.703 "is_configured": true, 00:13:48.703 "data_offset": 2048, 00:13:48.703 "data_size": 63488 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "name": "BaseBdev2", 00:13:48.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.703 "is_configured": false, 00:13:48.703 "data_offset": 0, 00:13:48.703 "data_size": 0 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "name": "BaseBdev3", 00:13:48.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.703 "is_configured": false, 00:13:48.703 "data_offset": 0, 00:13:48.703 "data_size": 0 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "name": "BaseBdev4", 00:13:48.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.703 "is_configured": false, 00:13:48.703 "data_offset": 0, 00:13:48.703 "data_size": 0 00:13:48.703 } 00:13:48.703 ] 00:13:48.703 }' 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.703 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.271 [2024-11-27 04:30:45.639575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:13:49.271 BaseBdev2 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.271 [ 00:13:49.271 { 00:13:49.271 "name": "BaseBdev2", 00:13:49.271 "aliases": [ 00:13:49.271 "db291e5a-3402-4e21-9e9e-a3c25790ba93" 00:13:49.271 ], 00:13:49.271 "product_name": "Malloc disk", 00:13:49.271 "block_size": 512, 00:13:49.271 "num_blocks": 65536, 00:13:49.271 "uuid": "db291e5a-3402-4e21-9e9e-a3c25790ba93", 00:13:49.271 
"assigned_rate_limits": { 00:13:49.271 "rw_ios_per_sec": 0, 00:13:49.271 "rw_mbytes_per_sec": 0, 00:13:49.271 "r_mbytes_per_sec": 0, 00:13:49.271 "w_mbytes_per_sec": 0 00:13:49.271 }, 00:13:49.271 "claimed": true, 00:13:49.271 "claim_type": "exclusive_write", 00:13:49.271 "zoned": false, 00:13:49.271 "supported_io_types": { 00:13:49.271 "read": true, 00:13:49.271 "write": true, 00:13:49.271 "unmap": true, 00:13:49.271 "flush": true, 00:13:49.271 "reset": true, 00:13:49.271 "nvme_admin": false, 00:13:49.271 "nvme_io": false, 00:13:49.271 "nvme_io_md": false, 00:13:49.271 "write_zeroes": true, 00:13:49.271 "zcopy": true, 00:13:49.271 "get_zone_info": false, 00:13:49.271 "zone_management": false, 00:13:49.271 "zone_append": false, 00:13:49.271 "compare": false, 00:13:49.271 "compare_and_write": false, 00:13:49.271 "abort": true, 00:13:49.271 "seek_hole": false, 00:13:49.271 "seek_data": false, 00:13:49.271 "copy": true, 00:13:49.271 "nvme_iov_md": false 00:13:49.271 }, 00:13:49.271 "memory_domains": [ 00:13:49.271 { 00:13:49.271 "dma_device_id": "system", 00:13:49.271 "dma_device_type": 1 00:13:49.271 }, 00:13:49.271 { 00:13:49.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.271 "dma_device_type": 2 00:13:49.271 } 00:13:49.271 ], 00:13:49.271 "driver_specific": {} 00:13:49.271 } 00:13:49.271 ] 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.271 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.271 "name": "Existed_Raid", 00:13:49.271 "uuid": "a139aa9d-2f6e-40ed-83db-5c6e335ab80a", 00:13:49.271 "strip_size_kb": 0, 00:13:49.271 "state": "configuring", 00:13:49.271 "raid_level": "raid1", 00:13:49.271 "superblock": true, 00:13:49.271 "num_base_bdevs": 4, 00:13:49.271 "num_base_bdevs_discovered": 2, 00:13:49.271 "num_base_bdevs_operational": 4, 
00:13:49.271 "base_bdevs_list": [ 00:13:49.271 { 00:13:49.271 "name": "BaseBdev1", 00:13:49.271 "uuid": "9b83d9f5-1224-481d-872e-606f85781684", 00:13:49.271 "is_configured": true, 00:13:49.271 "data_offset": 2048, 00:13:49.271 "data_size": 63488 00:13:49.271 }, 00:13:49.271 { 00:13:49.271 "name": "BaseBdev2", 00:13:49.271 "uuid": "db291e5a-3402-4e21-9e9e-a3c25790ba93", 00:13:49.271 "is_configured": true, 00:13:49.271 "data_offset": 2048, 00:13:49.271 "data_size": 63488 00:13:49.271 }, 00:13:49.271 { 00:13:49.271 "name": "BaseBdev3", 00:13:49.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.271 "is_configured": false, 00:13:49.271 "data_offset": 0, 00:13:49.271 "data_size": 0 00:13:49.271 }, 00:13:49.272 { 00:13:49.272 "name": "BaseBdev4", 00:13:49.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.272 "is_configured": false, 00:13:49.272 "data_offset": 0, 00:13:49.272 "data_size": 0 00:13:49.272 } 00:13:49.272 ] 00:13:49.272 }' 00:13:49.272 04:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.272 04:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.530 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:49.530 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.530 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.788 [2024-11-27 04:30:46.159651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.788 BaseBdev3 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.788 [ 00:13:49.788 { 00:13:49.788 "name": "BaseBdev3", 00:13:49.788 "aliases": [ 00:13:49.788 "49b213d1-1945-4095-81b4-7f122bb747a7" 00:13:49.788 ], 00:13:49.788 "product_name": "Malloc disk", 00:13:49.788 "block_size": 512, 00:13:49.788 "num_blocks": 65536, 00:13:49.788 "uuid": "49b213d1-1945-4095-81b4-7f122bb747a7", 00:13:49.788 "assigned_rate_limits": { 00:13:49.788 "rw_ios_per_sec": 0, 00:13:49.788 "rw_mbytes_per_sec": 0, 00:13:49.788 "r_mbytes_per_sec": 0, 00:13:49.788 "w_mbytes_per_sec": 0 00:13:49.788 }, 00:13:49.788 "claimed": true, 00:13:49.788 "claim_type": "exclusive_write", 00:13:49.788 "zoned": false, 00:13:49.788 "supported_io_types": { 00:13:49.788 "read": true, 00:13:49.788 
"write": true, 00:13:49.788 "unmap": true, 00:13:49.788 "flush": true, 00:13:49.788 "reset": true, 00:13:49.788 "nvme_admin": false, 00:13:49.788 "nvme_io": false, 00:13:49.788 "nvme_io_md": false, 00:13:49.788 "write_zeroes": true, 00:13:49.788 "zcopy": true, 00:13:49.788 "get_zone_info": false, 00:13:49.788 "zone_management": false, 00:13:49.788 "zone_append": false, 00:13:49.788 "compare": false, 00:13:49.788 "compare_and_write": false, 00:13:49.788 "abort": true, 00:13:49.788 "seek_hole": false, 00:13:49.788 "seek_data": false, 00:13:49.788 "copy": true, 00:13:49.788 "nvme_iov_md": false 00:13:49.788 }, 00:13:49.788 "memory_domains": [ 00:13:49.788 { 00:13:49.788 "dma_device_id": "system", 00:13:49.788 "dma_device_type": 1 00:13:49.788 }, 00:13:49.788 { 00:13:49.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.788 "dma_device_type": 2 00:13:49.788 } 00:13:49.788 ], 00:13:49.788 "driver_specific": {} 00:13:49.788 } 00:13:49.788 ] 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.788 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.789 "name": "Existed_Raid", 00:13:49.789 "uuid": "a139aa9d-2f6e-40ed-83db-5c6e335ab80a", 00:13:49.789 "strip_size_kb": 0, 00:13:49.789 "state": "configuring", 00:13:49.789 "raid_level": "raid1", 00:13:49.789 "superblock": true, 00:13:49.789 "num_base_bdevs": 4, 00:13:49.789 "num_base_bdevs_discovered": 3, 00:13:49.789 "num_base_bdevs_operational": 4, 00:13:49.789 "base_bdevs_list": [ 00:13:49.789 { 00:13:49.789 "name": "BaseBdev1", 00:13:49.789 "uuid": "9b83d9f5-1224-481d-872e-606f85781684", 00:13:49.789 "is_configured": true, 00:13:49.789 "data_offset": 2048, 00:13:49.789 "data_size": 63488 00:13:49.789 }, 00:13:49.789 { 00:13:49.789 "name": "BaseBdev2", 00:13:49.789 "uuid": 
"db291e5a-3402-4e21-9e9e-a3c25790ba93", 00:13:49.789 "is_configured": true, 00:13:49.789 "data_offset": 2048, 00:13:49.789 "data_size": 63488 00:13:49.789 }, 00:13:49.789 { 00:13:49.789 "name": "BaseBdev3", 00:13:49.789 "uuid": "49b213d1-1945-4095-81b4-7f122bb747a7", 00:13:49.789 "is_configured": true, 00:13:49.789 "data_offset": 2048, 00:13:49.789 "data_size": 63488 00:13:49.789 }, 00:13:49.789 { 00:13:49.789 "name": "BaseBdev4", 00:13:49.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.789 "is_configured": false, 00:13:49.789 "data_offset": 0, 00:13:49.789 "data_size": 0 00:13:49.789 } 00:13:49.789 ] 00:13:49.789 }' 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.789 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.356 [2024-11-27 04:30:46.699397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:50.356 [2024-11-27 04:30:46.699828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:50.356 [2024-11-27 04:30:46.699882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.356 [2024-11-27 04:30:46.700226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:50.356 [2024-11-27 04:30:46.700436] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:50.356 [2024-11-27 04:30:46.700486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:13:50.356 BaseBdev4 00:13:50.356 [2024-11-27 04:30:46.700684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.356 [ 00:13:50.356 { 00:13:50.356 "name": "BaseBdev4", 00:13:50.356 "aliases": [ 00:13:50.356 "bc9cb14c-dc76-4d85-ba5d-e9b6e21bcd00" 00:13:50.356 ], 00:13:50.356 "product_name": "Malloc disk", 00:13:50.356 "block_size": 512, 00:13:50.356 
"num_blocks": 65536, 00:13:50.356 "uuid": "bc9cb14c-dc76-4d85-ba5d-e9b6e21bcd00", 00:13:50.356 "assigned_rate_limits": { 00:13:50.356 "rw_ios_per_sec": 0, 00:13:50.356 "rw_mbytes_per_sec": 0, 00:13:50.356 "r_mbytes_per_sec": 0, 00:13:50.356 "w_mbytes_per_sec": 0 00:13:50.356 }, 00:13:50.356 "claimed": true, 00:13:50.356 "claim_type": "exclusive_write", 00:13:50.356 "zoned": false, 00:13:50.356 "supported_io_types": { 00:13:50.356 "read": true, 00:13:50.356 "write": true, 00:13:50.356 "unmap": true, 00:13:50.356 "flush": true, 00:13:50.356 "reset": true, 00:13:50.356 "nvme_admin": false, 00:13:50.356 "nvme_io": false, 00:13:50.356 "nvme_io_md": false, 00:13:50.356 "write_zeroes": true, 00:13:50.356 "zcopy": true, 00:13:50.356 "get_zone_info": false, 00:13:50.356 "zone_management": false, 00:13:50.356 "zone_append": false, 00:13:50.356 "compare": false, 00:13:50.356 "compare_and_write": false, 00:13:50.356 "abort": true, 00:13:50.356 "seek_hole": false, 00:13:50.356 "seek_data": false, 00:13:50.356 "copy": true, 00:13:50.356 "nvme_iov_md": false 00:13:50.356 }, 00:13:50.356 "memory_domains": [ 00:13:50.356 { 00:13:50.356 "dma_device_id": "system", 00:13:50.356 "dma_device_type": 1 00:13:50.356 }, 00:13:50.356 { 00:13:50.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.356 "dma_device_type": 2 00:13:50.356 } 00:13:50.356 ], 00:13:50.356 "driver_specific": {} 00:13:50.356 } 00:13:50.356 ] 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.356 "name": "Existed_Raid", 00:13:50.356 "uuid": "a139aa9d-2f6e-40ed-83db-5c6e335ab80a", 00:13:50.356 "strip_size_kb": 0, 00:13:50.356 "state": "online", 00:13:50.356 "raid_level": "raid1", 00:13:50.356 "superblock": true, 00:13:50.356 "num_base_bdevs": 4, 
00:13:50.356 "num_base_bdevs_discovered": 4, 00:13:50.356 "num_base_bdevs_operational": 4, 00:13:50.356 "base_bdevs_list": [ 00:13:50.356 { 00:13:50.356 "name": "BaseBdev1", 00:13:50.356 "uuid": "9b83d9f5-1224-481d-872e-606f85781684", 00:13:50.356 "is_configured": true, 00:13:50.356 "data_offset": 2048, 00:13:50.356 "data_size": 63488 00:13:50.356 }, 00:13:50.356 { 00:13:50.356 "name": "BaseBdev2", 00:13:50.356 "uuid": "db291e5a-3402-4e21-9e9e-a3c25790ba93", 00:13:50.356 "is_configured": true, 00:13:50.356 "data_offset": 2048, 00:13:50.356 "data_size": 63488 00:13:50.356 }, 00:13:50.356 { 00:13:50.356 "name": "BaseBdev3", 00:13:50.356 "uuid": "49b213d1-1945-4095-81b4-7f122bb747a7", 00:13:50.356 "is_configured": true, 00:13:50.356 "data_offset": 2048, 00:13:50.356 "data_size": 63488 00:13:50.356 }, 00:13:50.356 { 00:13:50.356 "name": "BaseBdev4", 00:13:50.356 "uuid": "bc9cb14c-dc76-4d85-ba5d-e9b6e21bcd00", 00:13:50.356 "is_configured": true, 00:13:50.356 "data_offset": 2048, 00:13:50.356 "data_size": 63488 00:13:50.356 } 00:13:50.356 ] 00:13:50.356 }' 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.356 04:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:50.616 
04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.616 [2024-11-27 04:30:47.175117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.616 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.875 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:50.875 "name": "Existed_Raid", 00:13:50.875 "aliases": [ 00:13:50.875 "a139aa9d-2f6e-40ed-83db-5c6e335ab80a" 00:13:50.875 ], 00:13:50.875 "product_name": "Raid Volume", 00:13:50.875 "block_size": 512, 00:13:50.875 "num_blocks": 63488, 00:13:50.875 "uuid": "a139aa9d-2f6e-40ed-83db-5c6e335ab80a", 00:13:50.875 "assigned_rate_limits": { 00:13:50.875 "rw_ios_per_sec": 0, 00:13:50.875 "rw_mbytes_per_sec": 0, 00:13:50.875 "r_mbytes_per_sec": 0, 00:13:50.875 "w_mbytes_per_sec": 0 00:13:50.875 }, 00:13:50.875 "claimed": false, 00:13:50.875 "zoned": false, 00:13:50.875 "supported_io_types": { 00:13:50.875 "read": true, 00:13:50.875 "write": true, 00:13:50.875 "unmap": false, 00:13:50.875 "flush": false, 00:13:50.875 "reset": true, 00:13:50.875 "nvme_admin": false, 00:13:50.875 "nvme_io": false, 00:13:50.875 "nvme_io_md": false, 00:13:50.875 "write_zeroes": true, 00:13:50.875 "zcopy": false, 00:13:50.875 "get_zone_info": false, 00:13:50.875 "zone_management": false, 00:13:50.875 "zone_append": false, 00:13:50.875 "compare": false, 00:13:50.875 "compare_and_write": false, 00:13:50.875 "abort": false, 00:13:50.875 "seek_hole": false, 00:13:50.875 "seek_data": false, 00:13:50.875 "copy": false, 00:13:50.875 
"nvme_iov_md": false 00:13:50.875 }, 00:13:50.875 "memory_domains": [ 00:13:50.875 { 00:13:50.875 "dma_device_id": "system", 00:13:50.875 "dma_device_type": 1 00:13:50.875 }, 00:13:50.875 { 00:13:50.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.875 "dma_device_type": 2 00:13:50.875 }, 00:13:50.875 { 00:13:50.875 "dma_device_id": "system", 00:13:50.875 "dma_device_type": 1 00:13:50.875 }, 00:13:50.875 { 00:13:50.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.875 "dma_device_type": 2 00:13:50.875 }, 00:13:50.875 { 00:13:50.875 "dma_device_id": "system", 00:13:50.875 "dma_device_type": 1 00:13:50.875 }, 00:13:50.875 { 00:13:50.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.875 "dma_device_type": 2 00:13:50.875 }, 00:13:50.875 { 00:13:50.876 "dma_device_id": "system", 00:13:50.876 "dma_device_type": 1 00:13:50.876 }, 00:13:50.876 { 00:13:50.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.876 "dma_device_type": 2 00:13:50.876 } 00:13:50.876 ], 00:13:50.876 "driver_specific": { 00:13:50.876 "raid": { 00:13:50.876 "uuid": "a139aa9d-2f6e-40ed-83db-5c6e335ab80a", 00:13:50.876 "strip_size_kb": 0, 00:13:50.876 "state": "online", 00:13:50.876 "raid_level": "raid1", 00:13:50.876 "superblock": true, 00:13:50.876 "num_base_bdevs": 4, 00:13:50.876 "num_base_bdevs_discovered": 4, 00:13:50.876 "num_base_bdevs_operational": 4, 00:13:50.876 "base_bdevs_list": [ 00:13:50.876 { 00:13:50.876 "name": "BaseBdev1", 00:13:50.876 "uuid": "9b83d9f5-1224-481d-872e-606f85781684", 00:13:50.876 "is_configured": true, 00:13:50.876 "data_offset": 2048, 00:13:50.876 "data_size": 63488 00:13:50.876 }, 00:13:50.876 { 00:13:50.876 "name": "BaseBdev2", 00:13:50.876 "uuid": "db291e5a-3402-4e21-9e9e-a3c25790ba93", 00:13:50.876 "is_configured": true, 00:13:50.876 "data_offset": 2048, 00:13:50.876 "data_size": 63488 00:13:50.876 }, 00:13:50.876 { 00:13:50.876 "name": "BaseBdev3", 00:13:50.876 "uuid": "49b213d1-1945-4095-81b4-7f122bb747a7", 00:13:50.876 "is_configured": true, 
00:13:50.876 "data_offset": 2048, 00:13:50.876 "data_size": 63488 00:13:50.876 }, 00:13:50.876 { 00:13:50.876 "name": "BaseBdev4", 00:13:50.876 "uuid": "bc9cb14c-dc76-4d85-ba5d-e9b6e21bcd00", 00:13:50.876 "is_configured": true, 00:13:50.876 "data_offset": 2048, 00:13:50.876 "data_size": 63488 00:13:50.876 } 00:13:50.876 ] 00:13:50.876 } 00:13:50.876 } 00:13:50.876 }' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:50.876 BaseBdev2 00:13:50.876 BaseBdev3 00:13:50.876 BaseBdev4' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.876 04:30:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.876 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.137 [2024-11-27 04:30:47.514280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:51.137 04:30:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.137 "name": "Existed_Raid", 00:13:51.137 "uuid": "a139aa9d-2f6e-40ed-83db-5c6e335ab80a", 00:13:51.137 "strip_size_kb": 0, 00:13:51.137 
"state": "online", 00:13:51.137 "raid_level": "raid1", 00:13:51.137 "superblock": true, 00:13:51.137 "num_base_bdevs": 4, 00:13:51.137 "num_base_bdevs_discovered": 3, 00:13:51.137 "num_base_bdevs_operational": 3, 00:13:51.137 "base_bdevs_list": [ 00:13:51.137 { 00:13:51.137 "name": null, 00:13:51.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.137 "is_configured": false, 00:13:51.137 "data_offset": 0, 00:13:51.137 "data_size": 63488 00:13:51.137 }, 00:13:51.137 { 00:13:51.137 "name": "BaseBdev2", 00:13:51.137 "uuid": "db291e5a-3402-4e21-9e9e-a3c25790ba93", 00:13:51.137 "is_configured": true, 00:13:51.137 "data_offset": 2048, 00:13:51.137 "data_size": 63488 00:13:51.137 }, 00:13:51.137 { 00:13:51.137 "name": "BaseBdev3", 00:13:51.137 "uuid": "49b213d1-1945-4095-81b4-7f122bb747a7", 00:13:51.137 "is_configured": true, 00:13:51.137 "data_offset": 2048, 00:13:51.137 "data_size": 63488 00:13:51.137 }, 00:13:51.137 { 00:13:51.137 "name": "BaseBdev4", 00:13:51.137 "uuid": "bc9cb14c-dc76-4d85-ba5d-e9b6e21bcd00", 00:13:51.137 "is_configured": true, 00:13:51.137 "data_offset": 2048, 00:13:51.137 "data_size": 63488 00:13:51.137 } 00:13:51.137 ] 00:13:51.137 }' 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.137 04:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.709 04:30:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.709 [2024-11-27 04:30:48.157078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.709 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.968 [2024-11-27 04:30:48.318740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.968 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.968 [2024-11-27 04:30:48.490028] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:51.968 [2024-11-27 04:30:48.490185] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.228 [2024-11-27 04:30:48.598609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.228 [2024-11-27 04:30:48.598691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.228 [2024-11-27 04:30:48.598705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.228 BaseBdev2 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:52.228 [ 00:13:52.228 { 00:13:52.228 "name": "BaseBdev2", 00:13:52.228 "aliases": [ 00:13:52.228 "e64f7578-9c35-4a1d-8307-f6a0bb071028" 00:13:52.228 ], 00:13:52.228 "product_name": "Malloc disk", 00:13:52.228 "block_size": 512, 00:13:52.228 "num_blocks": 65536, 00:13:52.228 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 00:13:52.228 "assigned_rate_limits": { 00:13:52.228 "rw_ios_per_sec": 0, 00:13:52.228 "rw_mbytes_per_sec": 0, 00:13:52.228 "r_mbytes_per_sec": 0, 00:13:52.228 "w_mbytes_per_sec": 0 00:13:52.228 }, 00:13:52.228 "claimed": false, 00:13:52.228 "zoned": false, 00:13:52.228 "supported_io_types": { 00:13:52.228 "read": true, 00:13:52.228 "write": true, 00:13:52.228 "unmap": true, 00:13:52.228 "flush": true, 00:13:52.228 "reset": true, 00:13:52.228 "nvme_admin": false, 00:13:52.228 "nvme_io": false, 00:13:52.228 "nvme_io_md": false, 00:13:52.228 "write_zeroes": true, 00:13:52.228 "zcopy": true, 00:13:52.228 "get_zone_info": false, 00:13:52.228 "zone_management": false, 00:13:52.228 "zone_append": false, 00:13:52.228 "compare": false, 00:13:52.228 "compare_and_write": false, 00:13:52.228 "abort": true, 00:13:52.228 "seek_hole": false, 00:13:52.228 "seek_data": false, 00:13:52.228 "copy": true, 00:13:52.228 "nvme_iov_md": false 00:13:52.228 }, 00:13:52.228 "memory_domains": [ 00:13:52.228 { 00:13:52.228 "dma_device_id": "system", 00:13:52.228 "dma_device_type": 1 00:13:52.228 }, 00:13:52.228 { 00:13:52.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.228 "dma_device_type": 2 00:13:52.228 } 00:13:52.228 ], 00:13:52.228 "driver_specific": {} 00:13:52.228 } 00:13:52.228 ] 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.228 04:30:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.228 BaseBdev3 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:52.228 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:52.229 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.229 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:52.229 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.229 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.229 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.229 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.229 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.488 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.488 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.488 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.488 04:30:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.488 [ 00:13:52.488 { 00:13:52.488 "name": "BaseBdev3", 00:13:52.488 "aliases": [ 00:13:52.488 "01083162-6d49-4d41-b38a-ad7472830fea" 00:13:52.488 ], 00:13:52.488 "product_name": "Malloc disk", 00:13:52.488 "block_size": 512, 00:13:52.488 "num_blocks": 65536, 00:13:52.488 "uuid": "01083162-6d49-4d41-b38a-ad7472830fea", 00:13:52.488 "assigned_rate_limits": { 00:13:52.488 "rw_ios_per_sec": 0, 00:13:52.488 "rw_mbytes_per_sec": 0, 00:13:52.489 "r_mbytes_per_sec": 0, 00:13:52.489 "w_mbytes_per_sec": 0 00:13:52.489 }, 00:13:52.489 "claimed": false, 00:13:52.489 "zoned": false, 00:13:52.489 "supported_io_types": { 00:13:52.489 "read": true, 00:13:52.489 "write": true, 00:13:52.489 "unmap": true, 00:13:52.489 "flush": true, 00:13:52.489 "reset": true, 00:13:52.489 "nvme_admin": false, 00:13:52.489 "nvme_io": false, 00:13:52.489 "nvme_io_md": false, 00:13:52.489 "write_zeroes": true, 00:13:52.489 "zcopy": true, 00:13:52.489 "get_zone_info": false, 00:13:52.489 "zone_management": false, 00:13:52.489 "zone_append": false, 00:13:52.489 "compare": false, 00:13:52.489 "compare_and_write": false, 00:13:52.489 "abort": true, 00:13:52.489 "seek_hole": false, 00:13:52.489 "seek_data": false, 00:13:52.489 "copy": true, 00:13:52.489 "nvme_iov_md": false 00:13:52.489 }, 00:13:52.489 "memory_domains": [ 00:13:52.489 { 00:13:52.489 "dma_device_id": "system", 00:13:52.489 "dma_device_type": 1 00:13:52.489 }, 00:13:52.489 { 00:13:52.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.489 "dma_device_type": 2 00:13:52.489 } 00:13:52.489 ], 00:13:52.489 "driver_specific": {} 00:13:52.489 } 00:13:52.489 ] 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.489 BaseBdev4 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.489 [ 00:13:52.489 { 00:13:52.489 "name": "BaseBdev4", 00:13:52.489 "aliases": [ 00:13:52.489 "9cfc3f91-0844-43d1-83bd-0a83f42f509a" 00:13:52.489 ], 00:13:52.489 "product_name": "Malloc disk", 00:13:52.489 "block_size": 512, 00:13:52.489 "num_blocks": 65536, 00:13:52.489 "uuid": "9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:52.489 "assigned_rate_limits": { 00:13:52.489 "rw_ios_per_sec": 0, 00:13:52.489 "rw_mbytes_per_sec": 0, 00:13:52.489 "r_mbytes_per_sec": 0, 00:13:52.489 "w_mbytes_per_sec": 0 00:13:52.489 }, 00:13:52.489 "claimed": false, 00:13:52.489 "zoned": false, 00:13:52.489 "supported_io_types": { 00:13:52.489 "read": true, 00:13:52.489 "write": true, 00:13:52.489 "unmap": true, 00:13:52.489 "flush": true, 00:13:52.489 "reset": true, 00:13:52.489 "nvme_admin": false, 00:13:52.489 "nvme_io": false, 00:13:52.489 "nvme_io_md": false, 00:13:52.489 "write_zeroes": true, 00:13:52.489 "zcopy": true, 00:13:52.489 "get_zone_info": false, 00:13:52.489 "zone_management": false, 00:13:52.489 "zone_append": false, 00:13:52.489 "compare": false, 00:13:52.489 "compare_and_write": false, 00:13:52.489 "abort": true, 00:13:52.489 "seek_hole": false, 00:13:52.489 "seek_data": false, 00:13:52.489 "copy": true, 00:13:52.489 "nvme_iov_md": false 00:13:52.489 }, 00:13:52.489 "memory_domains": [ 00:13:52.489 { 00:13:52.489 "dma_device_id": "system", 00:13:52.489 "dma_device_type": 1 00:13:52.489 }, 00:13:52.489 { 00:13:52.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.489 "dma_device_type": 2 00:13:52.489 } 00:13:52.489 ], 00:13:52.489 "driver_specific": {} 00:13:52.489 } 00:13:52.489 ] 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.489 [2024-11-27 04:30:48.941877] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.489 [2024-11-27 04:30:48.941989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.489 [2024-11-27 04:30:48.942036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.489 [2024-11-27 04:30:48.944576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.489 [2024-11-27 04:30:48.944670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.489 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.489 "name": "Existed_Raid", 00:13:52.489 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:52.489 "strip_size_kb": 0, 00:13:52.489 "state": "configuring", 00:13:52.489 "raid_level": "raid1", 00:13:52.489 "superblock": true, 00:13:52.489 "num_base_bdevs": 4, 00:13:52.489 "num_base_bdevs_discovered": 3, 00:13:52.489 "num_base_bdevs_operational": 4, 00:13:52.489 "base_bdevs_list": [ 00:13:52.489 { 00:13:52.489 "name": "BaseBdev1", 00:13:52.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.489 "is_configured": false, 00:13:52.489 "data_offset": 0, 00:13:52.489 "data_size": 0 00:13:52.489 }, 00:13:52.489 { 00:13:52.489 "name": "BaseBdev2", 00:13:52.489 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 
00:13:52.489 "is_configured": true, 00:13:52.489 "data_offset": 2048, 00:13:52.489 "data_size": 63488 00:13:52.489 }, 00:13:52.489 { 00:13:52.489 "name": "BaseBdev3", 00:13:52.489 "uuid": "01083162-6d49-4d41-b38a-ad7472830fea", 00:13:52.489 "is_configured": true, 00:13:52.489 "data_offset": 2048, 00:13:52.489 "data_size": 63488 00:13:52.490 }, 00:13:52.490 { 00:13:52.490 "name": "BaseBdev4", 00:13:52.490 "uuid": "9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:52.490 "is_configured": true, 00:13:52.490 "data_offset": 2048, 00:13:52.490 "data_size": 63488 00:13:52.490 } 00:13:52.490 ] 00:13:52.490 }' 00:13:52.490 04:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.490 04:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.056 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:53.056 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.056 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.056 [2024-11-27 04:30:49.393152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.056 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.056 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:53.056 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.057 "name": "Existed_Raid", 00:13:53.057 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:53.057 "strip_size_kb": 0, 00:13:53.057 "state": "configuring", 00:13:53.057 "raid_level": "raid1", 00:13:53.057 "superblock": true, 00:13:53.057 "num_base_bdevs": 4, 00:13:53.057 "num_base_bdevs_discovered": 2, 00:13:53.057 "num_base_bdevs_operational": 4, 00:13:53.057 "base_bdevs_list": [ 00:13:53.057 { 00:13:53.057 "name": "BaseBdev1", 00:13:53.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.057 "is_configured": false, 00:13:53.057 "data_offset": 0, 00:13:53.057 "data_size": 0 00:13:53.057 }, 00:13:53.057 { 00:13:53.057 "name": null, 00:13:53.057 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 00:13:53.057 
"is_configured": false, 00:13:53.057 "data_offset": 0, 00:13:53.057 "data_size": 63488 00:13:53.057 }, 00:13:53.057 { 00:13:53.057 "name": "BaseBdev3", 00:13:53.057 "uuid": "01083162-6d49-4d41-b38a-ad7472830fea", 00:13:53.057 "is_configured": true, 00:13:53.057 "data_offset": 2048, 00:13:53.057 "data_size": 63488 00:13:53.057 }, 00:13:53.057 { 00:13:53.057 "name": "BaseBdev4", 00:13:53.057 "uuid": "9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:53.057 "is_configured": true, 00:13:53.057 "data_offset": 2048, 00:13:53.057 "data_size": 63488 00:13:53.057 } 00:13:53.057 ] 00:13:53.057 }' 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.057 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.317 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.317 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:53.317 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.317 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.317 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.317 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:53.317 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:53.317 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.317 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.575 [2024-11-27 04:30:49.937397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.575 BaseBdev1 
00:13:53.575 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.575 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:53.575 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:53.575 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.575 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:53.575 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.575 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.576 [ 00:13:53.576 { 00:13:53.576 "name": "BaseBdev1", 00:13:53.576 "aliases": [ 00:13:53.576 "c7178009-46a1-4e23-b493-f4a5ba84001a" 00:13:53.576 ], 00:13:53.576 "product_name": "Malloc disk", 00:13:53.576 "block_size": 512, 00:13:53.576 "num_blocks": 65536, 00:13:53.576 "uuid": "c7178009-46a1-4e23-b493-f4a5ba84001a", 00:13:53.576 "assigned_rate_limits": { 00:13:53.576 
"rw_ios_per_sec": 0, 00:13:53.576 "rw_mbytes_per_sec": 0, 00:13:53.576 "r_mbytes_per_sec": 0, 00:13:53.576 "w_mbytes_per_sec": 0 00:13:53.576 }, 00:13:53.576 "claimed": true, 00:13:53.576 "claim_type": "exclusive_write", 00:13:53.576 "zoned": false, 00:13:53.576 "supported_io_types": { 00:13:53.576 "read": true, 00:13:53.576 "write": true, 00:13:53.576 "unmap": true, 00:13:53.576 "flush": true, 00:13:53.576 "reset": true, 00:13:53.576 "nvme_admin": false, 00:13:53.576 "nvme_io": false, 00:13:53.576 "nvme_io_md": false, 00:13:53.576 "write_zeroes": true, 00:13:53.576 "zcopy": true, 00:13:53.576 "get_zone_info": false, 00:13:53.576 "zone_management": false, 00:13:53.576 "zone_append": false, 00:13:53.576 "compare": false, 00:13:53.576 "compare_and_write": false, 00:13:53.576 "abort": true, 00:13:53.576 "seek_hole": false, 00:13:53.576 "seek_data": false, 00:13:53.576 "copy": true, 00:13:53.576 "nvme_iov_md": false 00:13:53.576 }, 00:13:53.576 "memory_domains": [ 00:13:53.576 { 00:13:53.576 "dma_device_id": "system", 00:13:53.576 "dma_device_type": 1 00:13:53.576 }, 00:13:53.576 { 00:13:53.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.576 "dma_device_type": 2 00:13:53.576 } 00:13:53.576 ], 00:13:53.576 "driver_specific": {} 00:13:53.576 } 00:13:53.576 ] 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.576 04:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.576 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.576 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.576 "name": "Existed_Raid", 00:13:53.576 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:53.576 "strip_size_kb": 0, 00:13:53.576 "state": "configuring", 00:13:53.576 "raid_level": "raid1", 00:13:53.576 "superblock": true, 00:13:53.576 "num_base_bdevs": 4, 00:13:53.576 "num_base_bdevs_discovered": 3, 00:13:53.576 "num_base_bdevs_operational": 4, 00:13:53.576 "base_bdevs_list": [ 00:13:53.576 { 00:13:53.576 "name": "BaseBdev1", 00:13:53.576 "uuid": "c7178009-46a1-4e23-b493-f4a5ba84001a", 00:13:53.576 "is_configured": true, 00:13:53.576 "data_offset": 2048, 00:13:53.576 "data_size": 63488 
00:13:53.576 }, 00:13:53.576 { 00:13:53.576 "name": null, 00:13:53.576 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 00:13:53.576 "is_configured": false, 00:13:53.576 "data_offset": 0, 00:13:53.576 "data_size": 63488 00:13:53.576 }, 00:13:53.576 { 00:13:53.576 "name": "BaseBdev3", 00:13:53.576 "uuid": "01083162-6d49-4d41-b38a-ad7472830fea", 00:13:53.576 "is_configured": true, 00:13:53.576 "data_offset": 2048, 00:13:53.576 "data_size": 63488 00:13:53.576 }, 00:13:53.576 { 00:13:53.576 "name": "BaseBdev4", 00:13:53.576 "uuid": "9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:53.576 "is_configured": true, 00:13:53.576 "data_offset": 2048, 00:13:53.576 "data_size": 63488 00:13:53.576 } 00:13:53.576 ] 00:13:53.576 }' 00:13:53.576 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.576 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.145 
[2024-11-27 04:30:50.540528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.145 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.146 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.146 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.146 04:30:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.146 "name": "Existed_Raid", 00:13:54.146 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:54.146 "strip_size_kb": 0, 00:13:54.146 "state": "configuring", 00:13:54.146 "raid_level": "raid1", 00:13:54.146 "superblock": true, 00:13:54.146 "num_base_bdevs": 4, 00:13:54.146 "num_base_bdevs_discovered": 2, 00:13:54.146 "num_base_bdevs_operational": 4, 00:13:54.146 "base_bdevs_list": [ 00:13:54.146 { 00:13:54.146 "name": "BaseBdev1", 00:13:54.146 "uuid": "c7178009-46a1-4e23-b493-f4a5ba84001a", 00:13:54.146 "is_configured": true, 00:13:54.146 "data_offset": 2048, 00:13:54.146 "data_size": 63488 00:13:54.146 }, 00:13:54.146 { 00:13:54.146 "name": null, 00:13:54.146 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 00:13:54.146 "is_configured": false, 00:13:54.146 "data_offset": 0, 00:13:54.146 "data_size": 63488 00:13:54.146 }, 00:13:54.146 { 00:13:54.146 "name": null, 00:13:54.146 "uuid": "01083162-6d49-4d41-b38a-ad7472830fea", 00:13:54.146 "is_configured": false, 00:13:54.146 "data_offset": 0, 00:13:54.146 "data_size": 63488 00:13:54.146 }, 00:13:54.146 { 00:13:54.146 "name": "BaseBdev4", 00:13:54.146 "uuid": "9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:54.146 "is_configured": true, 00:13:54.146 "data_offset": 2048, 00:13:54.146 "data_size": 63488 00:13:54.146 } 00:13:54.146 ] 00:13:54.146 }' 00:13:54.146 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.146 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.415 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.415 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.415 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.415 04:30:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:54.415 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.416 [2024-11-27 04:30:50.967814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.416 04:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.675 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.675 "name": "Existed_Raid", 00:13:54.675 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:54.675 "strip_size_kb": 0, 00:13:54.675 "state": "configuring", 00:13:54.675 "raid_level": "raid1", 00:13:54.675 "superblock": true, 00:13:54.675 "num_base_bdevs": 4, 00:13:54.675 "num_base_bdevs_discovered": 3, 00:13:54.675 "num_base_bdevs_operational": 4, 00:13:54.675 "base_bdevs_list": [ 00:13:54.675 { 00:13:54.675 "name": "BaseBdev1", 00:13:54.675 "uuid": "c7178009-46a1-4e23-b493-f4a5ba84001a", 00:13:54.675 "is_configured": true, 00:13:54.675 "data_offset": 2048, 00:13:54.675 "data_size": 63488 00:13:54.675 }, 00:13:54.675 { 00:13:54.675 "name": null, 00:13:54.675 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 00:13:54.675 "is_configured": false, 00:13:54.675 "data_offset": 0, 00:13:54.675 "data_size": 63488 00:13:54.675 }, 00:13:54.675 { 00:13:54.675 "name": "BaseBdev3", 00:13:54.675 "uuid": "01083162-6d49-4d41-b38a-ad7472830fea", 00:13:54.675 "is_configured": true, 00:13:54.675 "data_offset": 2048, 00:13:54.675 "data_size": 63488 00:13:54.675 }, 00:13:54.675 { 00:13:54.675 "name": "BaseBdev4", 00:13:54.675 "uuid": 
"9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:54.675 "is_configured": true, 00:13:54.675 "data_offset": 2048, 00:13:54.675 "data_size": 63488 00:13:54.675 } 00:13:54.675 ] 00:13:54.675 }' 00:13:54.675 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.675 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.934 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.934 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.934 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.934 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:54.934 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.934 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:54.934 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:54.934 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.934 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.934 [2024-11-27 04:30:51.499077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.193 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.193 "name": "Existed_Raid", 00:13:55.193 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:55.193 "strip_size_kb": 0, 00:13:55.193 "state": "configuring", 00:13:55.193 "raid_level": "raid1", 00:13:55.193 "superblock": true, 00:13:55.193 "num_base_bdevs": 4, 00:13:55.193 "num_base_bdevs_discovered": 2, 00:13:55.193 "num_base_bdevs_operational": 4, 00:13:55.193 "base_bdevs_list": [ 00:13:55.193 { 00:13:55.193 "name": null, 00:13:55.193 
"uuid": "c7178009-46a1-4e23-b493-f4a5ba84001a", 00:13:55.193 "is_configured": false, 00:13:55.193 "data_offset": 0, 00:13:55.193 "data_size": 63488 00:13:55.193 }, 00:13:55.193 { 00:13:55.193 "name": null, 00:13:55.193 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 00:13:55.193 "is_configured": false, 00:13:55.193 "data_offset": 0, 00:13:55.193 "data_size": 63488 00:13:55.193 }, 00:13:55.193 { 00:13:55.193 "name": "BaseBdev3", 00:13:55.193 "uuid": "01083162-6d49-4d41-b38a-ad7472830fea", 00:13:55.193 "is_configured": true, 00:13:55.193 "data_offset": 2048, 00:13:55.193 "data_size": 63488 00:13:55.193 }, 00:13:55.193 { 00:13:55.193 "name": "BaseBdev4", 00:13:55.193 "uuid": "9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:55.193 "is_configured": true, 00:13:55.194 "data_offset": 2048, 00:13:55.194 "data_size": 63488 00:13:55.194 } 00:13:55.194 ] 00:13:55.194 }' 00:13:55.194 04:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.194 04:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.761 [2024-11-27 04:30:52.142152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.761 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.761 "name": "Existed_Raid", 00:13:55.761 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:55.761 "strip_size_kb": 0, 00:13:55.761 "state": "configuring", 00:13:55.761 "raid_level": "raid1", 00:13:55.761 "superblock": true, 00:13:55.761 "num_base_bdevs": 4, 00:13:55.761 "num_base_bdevs_discovered": 3, 00:13:55.761 "num_base_bdevs_operational": 4, 00:13:55.761 "base_bdevs_list": [ 00:13:55.761 { 00:13:55.761 "name": null, 00:13:55.761 "uuid": "c7178009-46a1-4e23-b493-f4a5ba84001a", 00:13:55.761 "is_configured": false, 00:13:55.761 "data_offset": 0, 00:13:55.761 "data_size": 63488 00:13:55.761 }, 00:13:55.761 { 00:13:55.761 "name": "BaseBdev2", 00:13:55.761 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 00:13:55.761 "is_configured": true, 00:13:55.761 "data_offset": 2048, 00:13:55.761 "data_size": 63488 00:13:55.761 }, 00:13:55.761 { 00:13:55.761 "name": "BaseBdev3", 00:13:55.761 "uuid": "01083162-6d49-4d41-b38a-ad7472830fea", 00:13:55.761 "is_configured": true, 00:13:55.761 "data_offset": 2048, 00:13:55.761 "data_size": 63488 00:13:55.761 }, 00:13:55.761 { 00:13:55.762 "name": "BaseBdev4", 00:13:55.762 "uuid": "9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:55.762 "is_configured": true, 00:13:55.762 "data_offset": 2048, 00:13:55.762 "data_size": 63488 00:13:55.762 } 00:13:55.762 ] 00:13:55.762 }' 00:13:55.762 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.762 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.020 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.020 04:30:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.020 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.020 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.278 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.278 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:56.278 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.278 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c7178009-46a1-4e23-b493-f4a5ba84001a 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.279 [2024-11-27 04:30:52.749728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:56.279 [2024-11-27 04:30:52.750051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:56.279 [2024-11-27 04:30:52.750072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.279 [2024-11-27 04:30:52.750400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:56.279 
NewBaseBdev 00:13:56.279 [2024-11-27 04:30:52.750590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:56.279 [2024-11-27 04:30:52.750608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:56.279 [2024-11-27 04:30:52.750791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.279 [ 00:13:56.279 { 00:13:56.279 "name": "NewBaseBdev", 00:13:56.279 "aliases": [ 00:13:56.279 "c7178009-46a1-4e23-b493-f4a5ba84001a" 00:13:56.279 ], 00:13:56.279 "product_name": "Malloc disk", 00:13:56.279 "block_size": 512, 00:13:56.279 "num_blocks": 65536, 00:13:56.279 "uuid": "c7178009-46a1-4e23-b493-f4a5ba84001a", 00:13:56.279 "assigned_rate_limits": { 00:13:56.279 "rw_ios_per_sec": 0, 00:13:56.279 "rw_mbytes_per_sec": 0, 00:13:56.279 "r_mbytes_per_sec": 0, 00:13:56.279 "w_mbytes_per_sec": 0 00:13:56.279 }, 00:13:56.279 "claimed": true, 00:13:56.279 "claim_type": "exclusive_write", 00:13:56.279 "zoned": false, 00:13:56.279 "supported_io_types": { 00:13:56.279 "read": true, 00:13:56.279 "write": true, 00:13:56.279 "unmap": true, 00:13:56.279 "flush": true, 00:13:56.279 "reset": true, 00:13:56.279 "nvme_admin": false, 00:13:56.279 "nvme_io": false, 00:13:56.279 "nvme_io_md": false, 00:13:56.279 "write_zeroes": true, 00:13:56.279 "zcopy": true, 00:13:56.279 "get_zone_info": false, 00:13:56.279 "zone_management": false, 00:13:56.279 "zone_append": false, 00:13:56.279 "compare": false, 00:13:56.279 "compare_and_write": false, 00:13:56.279 "abort": true, 00:13:56.279 "seek_hole": false, 00:13:56.279 "seek_data": false, 00:13:56.279 "copy": true, 00:13:56.279 "nvme_iov_md": false 00:13:56.279 }, 00:13:56.279 "memory_domains": [ 00:13:56.279 { 00:13:56.279 "dma_device_id": "system", 00:13:56.279 "dma_device_type": 1 00:13:56.279 }, 00:13:56.279 { 00:13:56.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.279 "dma_device_type": 2 00:13:56.279 } 00:13:56.279 ], 00:13:56.279 "driver_specific": {} 00:13:56.279 } 00:13:56.279 ] 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.279 "name": "Existed_Raid", 00:13:56.279 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:56.279 "strip_size_kb": 0, 00:13:56.279 "state": "online", 00:13:56.279 "raid_level": 
"raid1", 00:13:56.279 "superblock": true, 00:13:56.279 "num_base_bdevs": 4, 00:13:56.279 "num_base_bdevs_discovered": 4, 00:13:56.279 "num_base_bdevs_operational": 4, 00:13:56.279 "base_bdevs_list": [ 00:13:56.279 { 00:13:56.279 "name": "NewBaseBdev", 00:13:56.279 "uuid": "c7178009-46a1-4e23-b493-f4a5ba84001a", 00:13:56.279 "is_configured": true, 00:13:56.279 "data_offset": 2048, 00:13:56.279 "data_size": 63488 00:13:56.279 }, 00:13:56.279 { 00:13:56.279 "name": "BaseBdev2", 00:13:56.279 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 00:13:56.279 "is_configured": true, 00:13:56.279 "data_offset": 2048, 00:13:56.279 "data_size": 63488 00:13:56.279 }, 00:13:56.279 { 00:13:56.279 "name": "BaseBdev3", 00:13:56.279 "uuid": "01083162-6d49-4d41-b38a-ad7472830fea", 00:13:56.279 "is_configured": true, 00:13:56.279 "data_offset": 2048, 00:13:56.279 "data_size": 63488 00:13:56.279 }, 00:13:56.279 { 00:13:56.279 "name": "BaseBdev4", 00:13:56.279 "uuid": "9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:56.279 "is_configured": true, 00:13:56.279 "data_offset": 2048, 00:13:56.279 "data_size": 63488 00:13:56.279 } 00:13:56.279 ] 00:13:56.279 }' 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.279 04:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.848 [2024-11-27 04:30:53.201397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.848 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:56.848 "name": "Existed_Raid", 00:13:56.848 "aliases": [ 00:13:56.848 "f86d7556-6aca-4e7b-aed3-462a6b9d52fe" 00:13:56.848 ], 00:13:56.848 "product_name": "Raid Volume", 00:13:56.848 "block_size": 512, 00:13:56.848 "num_blocks": 63488, 00:13:56.848 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:56.848 "assigned_rate_limits": { 00:13:56.848 "rw_ios_per_sec": 0, 00:13:56.848 "rw_mbytes_per_sec": 0, 00:13:56.848 "r_mbytes_per_sec": 0, 00:13:56.848 "w_mbytes_per_sec": 0 00:13:56.849 }, 00:13:56.849 "claimed": false, 00:13:56.849 "zoned": false, 00:13:56.849 "supported_io_types": { 00:13:56.849 "read": true, 00:13:56.849 "write": true, 00:13:56.849 "unmap": false, 00:13:56.849 "flush": false, 00:13:56.849 "reset": true, 00:13:56.849 "nvme_admin": false, 00:13:56.849 "nvme_io": false, 00:13:56.849 "nvme_io_md": false, 00:13:56.849 "write_zeroes": true, 00:13:56.849 "zcopy": false, 00:13:56.849 "get_zone_info": false, 00:13:56.849 "zone_management": false, 00:13:56.849 "zone_append": false, 00:13:56.849 "compare": false, 00:13:56.849 "compare_and_write": false, 00:13:56.849 "abort": false, 00:13:56.849 "seek_hole": false, 
00:13:56.849 "seek_data": false, 00:13:56.849 "copy": false, 00:13:56.849 "nvme_iov_md": false 00:13:56.849 }, 00:13:56.849 "memory_domains": [ 00:13:56.849 { 00:13:56.849 "dma_device_id": "system", 00:13:56.849 "dma_device_type": 1 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.849 "dma_device_type": 2 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "dma_device_id": "system", 00:13:56.849 "dma_device_type": 1 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.849 "dma_device_type": 2 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "dma_device_id": "system", 00:13:56.849 "dma_device_type": 1 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.849 "dma_device_type": 2 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "dma_device_id": "system", 00:13:56.849 "dma_device_type": 1 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.849 "dma_device_type": 2 00:13:56.849 } 00:13:56.849 ], 00:13:56.849 "driver_specific": { 00:13:56.849 "raid": { 00:13:56.849 "uuid": "f86d7556-6aca-4e7b-aed3-462a6b9d52fe", 00:13:56.849 "strip_size_kb": 0, 00:13:56.849 "state": "online", 00:13:56.849 "raid_level": "raid1", 00:13:56.849 "superblock": true, 00:13:56.849 "num_base_bdevs": 4, 00:13:56.849 "num_base_bdevs_discovered": 4, 00:13:56.849 "num_base_bdevs_operational": 4, 00:13:56.849 "base_bdevs_list": [ 00:13:56.849 { 00:13:56.849 "name": "NewBaseBdev", 00:13:56.849 "uuid": "c7178009-46a1-4e23-b493-f4a5ba84001a", 00:13:56.849 "is_configured": true, 00:13:56.849 "data_offset": 2048, 00:13:56.849 "data_size": 63488 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "name": "BaseBdev2", 00:13:56.849 "uuid": "e64f7578-9c35-4a1d-8307-f6a0bb071028", 00:13:56.849 "is_configured": true, 00:13:56.849 "data_offset": 2048, 00:13:56.849 "data_size": 63488 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "name": "BaseBdev3", 00:13:56.849 "uuid": 
"01083162-6d49-4d41-b38a-ad7472830fea", 00:13:56.849 "is_configured": true, 00:13:56.849 "data_offset": 2048, 00:13:56.849 "data_size": 63488 00:13:56.849 }, 00:13:56.849 { 00:13:56.849 "name": "BaseBdev4", 00:13:56.849 "uuid": "9cfc3f91-0844-43d1-83bd-0a83f42f509a", 00:13:56.849 "is_configured": true, 00:13:56.849 "data_offset": 2048, 00:13:56.849 "data_size": 63488 00:13:56.849 } 00:13:56.849 ] 00:13:56.849 } 00:13:56.849 } 00:13:56.849 }' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:56.849 BaseBdev2 00:13:56.849 BaseBdev3 00:13:56.849 BaseBdev4' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.849 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.108 
04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.108 [2024-11-27 04:30:53.520448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.108 [2024-11-27 04:30:53.520484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.108 [2024-11-27 04:30:53.520587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.108 [2024-11-27 04:30:53.520960] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.108 [2024-11-27 04:30:53.520976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:57.108 04:30:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74151 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74151 ']' 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74151 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74151 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74151' 00:13:57.108 killing process with pid 74151 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74151 00:13:57.108 [2024-11-27 04:30:53.564693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.108 04:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74151 00:13:57.677 [2024-11-27 04:30:54.032813] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.055 04:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:59.055 00:13:59.055 real 0m12.134s 00:13:59.055 user 0m18.786s 00:13:59.055 sys 0m2.383s 00:13:59.055 04:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.055 04:30:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.055 ************************************ 00:13:59.055 END TEST raid_state_function_test_sb 00:13:59.055 ************************************ 00:13:59.055 04:30:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:59.055 04:30:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:59.055 04:30:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.055 04:30:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.055 ************************************ 00:13:59.055 START TEST raid_superblock_test 00:13:59.055 ************************************ 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74828 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74828 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74828 ']' 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.055 04:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.055 [2024-11-27 04:30:55.525498] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:13:59.055 [2024-11-27 04:30:55.525704] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74828 ] 00:13:59.315 [2024-11-27 04:30:55.682125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.315 [2024-11-27 04:30:55.833489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.575 [2024-11-27 04:30:56.083503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.575 [2024-11-27 04:30:56.083663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:59.835 
04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.835 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.095 malloc1 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.095 [2024-11-27 04:30:56.463366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:00.095 [2024-11-27 04:30:56.463526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.095 [2024-11-27 04:30:56.463579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:00.095 [2024-11-27 04:30:56.463621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.095 [2024-11-27 04:30:56.466535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.095 [2024-11-27 04:30:56.466616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:00.095 pt1 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.095 malloc2 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.095 [2024-11-27 04:30:56.532402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.095 [2024-11-27 04:30:56.532533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.095 [2024-11-27 04:30:56.532590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:00.095 [2024-11-27 04:30:56.532628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.095 [2024-11-27 04:30:56.535238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.095 [2024-11-27 04:30:56.535315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.095 
pt2 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.095 malloc3 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.095 [2024-11-27 04:30:56.617434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:00.095 [2024-11-27 04:30:56.617513] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.095 [2024-11-27 04:30:56.617544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:00.095 [2024-11-27 04:30:56.617555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.095 [2024-11-27 04:30:56.620511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.095 [2024-11-27 04:30:56.620632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:00.095 pt3 00:14:00.095 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.096 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.357 malloc4 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.357 [2024-11-27 04:30:56.690127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:00.357 [2024-11-27 04:30:56.690288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.357 [2024-11-27 04:30:56.690342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:00.357 [2024-11-27 04:30:56.690385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.357 [2024-11-27 04:30:56.693181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.357 [2024-11-27 04:30:56.693257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:00.357 pt4 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.357 [2024-11-27 04:30:56.702177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:00.357 [2024-11-27 04:30:56.704526] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.357 [2024-11-27 04:30:56.704663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:00.357 [2024-11-27 04:30:56.704765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:00.357 [2024-11-27 04:30:56.705055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:00.357 [2024-11-27 04:30:56.705128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:00.357 [2024-11-27 04:30:56.705496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:00.357 [2024-11-27 04:30:56.705750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:00.357 [2024-11-27 04:30:56.705799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:00.357 [2024-11-27 04:30:56.706101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.357 
04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.357 "name": "raid_bdev1", 00:14:00.357 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:00.357 "strip_size_kb": 0, 00:14:00.357 "state": "online", 00:14:00.357 "raid_level": "raid1", 00:14:00.357 "superblock": true, 00:14:00.357 "num_base_bdevs": 4, 00:14:00.357 "num_base_bdevs_discovered": 4, 00:14:00.357 "num_base_bdevs_operational": 4, 00:14:00.357 "base_bdevs_list": [ 00:14:00.357 { 00:14:00.357 "name": "pt1", 00:14:00.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.357 "is_configured": true, 00:14:00.357 "data_offset": 2048, 00:14:00.357 "data_size": 63488 00:14:00.357 }, 00:14:00.357 { 00:14:00.357 "name": "pt2", 00:14:00.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.357 "is_configured": true, 00:14:00.357 "data_offset": 2048, 00:14:00.357 "data_size": 63488 00:14:00.357 }, 00:14:00.357 { 00:14:00.357 "name": "pt3", 00:14:00.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.357 "is_configured": true, 00:14:00.357 "data_offset": 2048, 00:14:00.357 "data_size": 63488 
00:14:00.357 }, 00:14:00.357 { 00:14:00.357 "name": "pt4", 00:14:00.357 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.357 "is_configured": true, 00:14:00.357 "data_offset": 2048, 00:14:00.357 "data_size": 63488 00:14:00.357 } 00:14:00.357 ] 00:14:00.357 }' 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.357 04:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:00.646 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.646 [2024-11-27 04:30:57.205736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.953 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.953 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:00.953 "name": "raid_bdev1", 00:14:00.953 "aliases": [ 00:14:00.953 "eb671543-749b-47bf-9b78-d5ddf1818168" 00:14:00.953 ], 
00:14:00.953 "product_name": "Raid Volume", 00:14:00.953 "block_size": 512, 00:14:00.953 "num_blocks": 63488, 00:14:00.953 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:00.953 "assigned_rate_limits": { 00:14:00.953 "rw_ios_per_sec": 0, 00:14:00.953 "rw_mbytes_per_sec": 0, 00:14:00.953 "r_mbytes_per_sec": 0, 00:14:00.953 "w_mbytes_per_sec": 0 00:14:00.953 }, 00:14:00.953 "claimed": false, 00:14:00.953 "zoned": false, 00:14:00.953 "supported_io_types": { 00:14:00.953 "read": true, 00:14:00.953 "write": true, 00:14:00.953 "unmap": false, 00:14:00.953 "flush": false, 00:14:00.953 "reset": true, 00:14:00.953 "nvme_admin": false, 00:14:00.953 "nvme_io": false, 00:14:00.953 "nvme_io_md": false, 00:14:00.953 "write_zeroes": true, 00:14:00.953 "zcopy": false, 00:14:00.953 "get_zone_info": false, 00:14:00.953 "zone_management": false, 00:14:00.953 "zone_append": false, 00:14:00.953 "compare": false, 00:14:00.953 "compare_and_write": false, 00:14:00.953 "abort": false, 00:14:00.953 "seek_hole": false, 00:14:00.953 "seek_data": false, 00:14:00.953 "copy": false, 00:14:00.953 "nvme_iov_md": false 00:14:00.953 }, 00:14:00.953 "memory_domains": [ 00:14:00.953 { 00:14:00.953 "dma_device_id": "system", 00:14:00.953 "dma_device_type": 1 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.953 "dma_device_type": 2 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "dma_device_id": "system", 00:14:00.953 "dma_device_type": 1 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.953 "dma_device_type": 2 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "dma_device_id": "system", 00:14:00.953 "dma_device_type": 1 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.953 "dma_device_type": 2 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "dma_device_id": "system", 00:14:00.953 "dma_device_type": 1 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:00.953 "dma_device_type": 2 00:14:00.953 } 00:14:00.953 ], 00:14:00.953 "driver_specific": { 00:14:00.953 "raid": { 00:14:00.953 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:00.953 "strip_size_kb": 0, 00:14:00.953 "state": "online", 00:14:00.953 "raid_level": "raid1", 00:14:00.953 "superblock": true, 00:14:00.953 "num_base_bdevs": 4, 00:14:00.953 "num_base_bdevs_discovered": 4, 00:14:00.953 "num_base_bdevs_operational": 4, 00:14:00.953 "base_bdevs_list": [ 00:14:00.953 { 00:14:00.953 "name": "pt1", 00:14:00.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.953 "is_configured": true, 00:14:00.953 "data_offset": 2048, 00:14:00.953 "data_size": 63488 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "name": "pt2", 00:14:00.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.953 "is_configured": true, 00:14:00.953 "data_offset": 2048, 00:14:00.953 "data_size": 63488 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "name": "pt3", 00:14:00.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.953 "is_configured": true, 00:14:00.953 "data_offset": 2048, 00:14:00.953 "data_size": 63488 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "name": "pt4", 00:14:00.953 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.953 "is_configured": true, 00:14:00.953 "data_offset": 2048, 00:14:00.953 "data_size": 63488 00:14:00.953 } 00:14:00.953 ] 00:14:00.953 } 00:14:00.953 } 00:14:00.953 }' 00:14:00.953 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.953 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:00.953 pt2 00:14:00.953 pt3 00:14:00.953 pt4' 00:14:00.953 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.953 04:30:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:00.953 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.953 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.954 04:30:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:00.954 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.954 [2024-11-27 04:30:57.525066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eb671543-749b-47bf-9b78-d5ddf1818168 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eb671543-749b-47bf-9b78-d5ddf1818168 ']' 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.214 [2024-11-27 04:30:57.568664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.214 [2024-11-27 04:30:57.568776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.214 [2024-11-27 04:30:57.568889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.214 [2024-11-27 04:30:57.569000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.214 [2024-11-27 04:30:57.569017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.214 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.215 04:30:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.215 [2024-11-27 04:30:57.732437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:01.215 [2024-11-27 04:30:57.734395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:01.215 [2024-11-27 04:30:57.734488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:01.215 [2024-11-27 04:30:57.734543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:01.215 [2024-11-27 04:30:57.734624] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:01.215 [2024-11-27 04:30:57.734725] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:01.215 [2024-11-27 04:30:57.734812] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:01.215 [2024-11-27 04:30:57.734882] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:01.215 [2024-11-27 04:30:57.734931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.215 [2024-11-27 04:30:57.734991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:14:01.215 request: 00:14:01.215 { 00:14:01.215 "name": "raid_bdev1", 00:14:01.215 "raid_level": "raid1", 00:14:01.215 "base_bdevs": [ 00:14:01.215 "malloc1", 00:14:01.215 "malloc2", 00:14:01.215 "malloc3", 00:14:01.215 "malloc4" 00:14:01.215 ], 00:14:01.215 "superblock": false, 00:14:01.215 "method": "bdev_raid_create", 00:14:01.215 "req_id": 1 00:14:01.215 } 00:14:01.215 Got JSON-RPC error response 00:14:01.215 response: 00:14:01.215 { 00:14:01.215 "code": -17, 00:14:01.215 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:01.215 } 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:01.215 
04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.215 [2024-11-27 04:30:57.788310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:01.215 [2024-11-27 04:30:57.788476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.215 [2024-11-27 04:30:57.788516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:01.215 [2024-11-27 04:30:57.788551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.215 [2024-11-27 04:30:57.790976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.215 [2024-11-27 04:30:57.791070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:01.215 [2024-11-27 04:30:57.791216] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:01.215 [2024-11-27 04:30:57.791313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:01.215 pt1 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.215 04:30:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.215 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.475 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.475 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.475 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.475 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.475 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.475 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.475 "name": "raid_bdev1", 00:14:01.476 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:01.476 "strip_size_kb": 0, 00:14:01.476 "state": "configuring", 00:14:01.476 "raid_level": "raid1", 00:14:01.476 "superblock": true, 00:14:01.476 "num_base_bdevs": 4, 00:14:01.476 "num_base_bdevs_discovered": 1, 00:14:01.476 "num_base_bdevs_operational": 4, 00:14:01.476 "base_bdevs_list": [ 00:14:01.476 { 00:14:01.476 "name": "pt1", 00:14:01.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.476 "is_configured": true, 00:14:01.476 "data_offset": 2048, 00:14:01.476 "data_size": 63488 00:14:01.476 }, 00:14:01.476 { 00:14:01.476 "name": null, 00:14:01.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.476 "is_configured": false, 00:14:01.476 "data_offset": 2048, 00:14:01.476 "data_size": 63488 00:14:01.476 }, 00:14:01.476 { 00:14:01.476 "name": null, 00:14:01.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.476 
"is_configured": false, 00:14:01.476 "data_offset": 2048, 00:14:01.476 "data_size": 63488 00:14:01.476 }, 00:14:01.476 { 00:14:01.476 "name": null, 00:14:01.476 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.476 "is_configured": false, 00:14:01.476 "data_offset": 2048, 00:14:01.476 "data_size": 63488 00:14:01.476 } 00:14:01.476 ] 00:14:01.476 }' 00:14:01.476 04:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.476 04:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.736 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:01.736 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.736 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.736 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.736 [2024-11-27 04:30:58.191697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:01.736 [2024-11-27 04:30:58.191799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.736 [2024-11-27 04:30:58.191825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:01.736 [2024-11-27 04:30:58.191837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.736 [2024-11-27 04:30:58.192303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.737 [2024-11-27 04:30:58.192326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.737 [2024-11-27 04:30:58.192413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:01.737 [2024-11-27 04:30:58.192441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:14:01.737 pt2 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.737 [2024-11-27 04:30:58.203713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.737 "name": "raid_bdev1", 00:14:01.737 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:01.737 "strip_size_kb": 0, 00:14:01.737 "state": "configuring", 00:14:01.737 "raid_level": "raid1", 00:14:01.737 "superblock": true, 00:14:01.737 "num_base_bdevs": 4, 00:14:01.737 "num_base_bdevs_discovered": 1, 00:14:01.737 "num_base_bdevs_operational": 4, 00:14:01.737 "base_bdevs_list": [ 00:14:01.737 { 00:14:01.737 "name": "pt1", 00:14:01.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.737 "is_configured": true, 00:14:01.737 "data_offset": 2048, 00:14:01.737 "data_size": 63488 00:14:01.737 }, 00:14:01.737 { 00:14:01.737 "name": null, 00:14:01.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.737 "is_configured": false, 00:14:01.737 "data_offset": 0, 00:14:01.737 "data_size": 63488 00:14:01.737 }, 00:14:01.737 { 00:14:01.737 "name": null, 00:14:01.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.737 "is_configured": false, 00:14:01.737 "data_offset": 2048, 00:14:01.737 "data_size": 63488 00:14:01.737 }, 00:14:01.737 { 00:14:01.737 "name": null, 00:14:01.737 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.737 "is_configured": false, 00:14:01.737 "data_offset": 2048, 00:14:01.737 "data_size": 63488 00:14:01.737 } 00:14:01.737 ] 00:14:01.737 }' 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.737 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.307 [2024-11-27 04:30:58.702901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:02.307 [2024-11-27 04:30:58.702991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.307 [2024-11-27 04:30:58.703013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:02.307 [2024-11-27 04:30:58.703023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.307 [2024-11-27 04:30:58.703543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.307 [2024-11-27 04:30:58.703572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:02.307 [2024-11-27 04:30:58.703660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:02.307 [2024-11-27 04:30:58.703685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:02.307 pt2 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:02.307 04:30:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.307 [2024-11-27 04:30:58.714867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:02.307 [2024-11-27 04:30:58.715029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.307 [2024-11-27 04:30:58.715070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:02.307 [2024-11-27 04:30:58.715126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.307 [2024-11-27 04:30:58.715611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.307 [2024-11-27 04:30:58.715676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:02.307 [2024-11-27 04:30:58.715789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:02.307 [2024-11-27 04:30:58.715844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:02.307 pt3 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.307 [2024-11-27 04:30:58.726809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:02.307 [2024-11-27 
04:30:58.726913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.307 [2024-11-27 04:30:58.726951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:02.307 [2024-11-27 04:30:58.726984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.307 [2024-11-27 04:30:58.727414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.307 [2024-11-27 04:30:58.727503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:02.307 [2024-11-27 04:30:58.727604] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:02.307 [2024-11-27 04:30:58.727662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:02.307 [2024-11-27 04:30:58.727891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:02.307 [2024-11-27 04:30:58.727934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:02.307 [2024-11-27 04:30:58.728226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:02.307 [2024-11-27 04:30:58.728433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:02.307 [2024-11-27 04:30:58.728483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:02.307 [2024-11-27 04:30:58.728661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.307 pt4 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:02.307 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.308 "name": "raid_bdev1", 00:14:02.308 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:02.308 "strip_size_kb": 0, 00:14:02.308 "state": "online", 00:14:02.308 "raid_level": "raid1", 00:14:02.308 "superblock": true, 00:14:02.308 "num_base_bdevs": 4, 00:14:02.308 
"num_base_bdevs_discovered": 4, 00:14:02.308 "num_base_bdevs_operational": 4, 00:14:02.308 "base_bdevs_list": [ 00:14:02.308 { 00:14:02.308 "name": "pt1", 00:14:02.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.308 "is_configured": true, 00:14:02.308 "data_offset": 2048, 00:14:02.308 "data_size": 63488 00:14:02.308 }, 00:14:02.308 { 00:14:02.308 "name": "pt2", 00:14:02.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.308 "is_configured": true, 00:14:02.308 "data_offset": 2048, 00:14:02.308 "data_size": 63488 00:14:02.308 }, 00:14:02.308 { 00:14:02.308 "name": "pt3", 00:14:02.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.308 "is_configured": true, 00:14:02.308 "data_offset": 2048, 00:14:02.308 "data_size": 63488 00:14:02.308 }, 00:14:02.308 { 00:14:02.308 "name": "pt4", 00:14:02.308 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.308 "is_configured": true, 00:14:02.308 "data_offset": 2048, 00:14:02.308 "data_size": 63488 00:14:02.308 } 00:14:02.308 ] 00:14:02.308 }' 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.308 04:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.878 04:30:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.878 [2024-11-27 04:30:59.190523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.878 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.878 "name": "raid_bdev1", 00:14:02.878 "aliases": [ 00:14:02.878 "eb671543-749b-47bf-9b78-d5ddf1818168" 00:14:02.878 ], 00:14:02.878 "product_name": "Raid Volume", 00:14:02.878 "block_size": 512, 00:14:02.878 "num_blocks": 63488, 00:14:02.878 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:02.878 "assigned_rate_limits": { 00:14:02.878 "rw_ios_per_sec": 0, 00:14:02.878 "rw_mbytes_per_sec": 0, 00:14:02.878 "r_mbytes_per_sec": 0, 00:14:02.878 "w_mbytes_per_sec": 0 00:14:02.878 }, 00:14:02.878 "claimed": false, 00:14:02.879 "zoned": false, 00:14:02.879 "supported_io_types": { 00:14:02.879 "read": true, 00:14:02.879 "write": true, 00:14:02.879 "unmap": false, 00:14:02.879 "flush": false, 00:14:02.879 "reset": true, 00:14:02.879 "nvme_admin": false, 00:14:02.879 "nvme_io": false, 00:14:02.879 "nvme_io_md": false, 00:14:02.879 "write_zeroes": true, 00:14:02.879 "zcopy": false, 00:14:02.879 "get_zone_info": false, 00:14:02.879 "zone_management": false, 00:14:02.879 "zone_append": false, 00:14:02.879 "compare": false, 00:14:02.879 "compare_and_write": false, 00:14:02.879 "abort": false, 00:14:02.879 "seek_hole": false, 00:14:02.879 "seek_data": false, 00:14:02.879 "copy": false, 00:14:02.879 "nvme_iov_md": false 00:14:02.879 }, 00:14:02.879 "memory_domains": [ 00:14:02.879 { 00:14:02.879 "dma_device_id": "system", 00:14:02.879 
"dma_device_type": 1 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.879 "dma_device_type": 2 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "dma_device_id": "system", 00:14:02.879 "dma_device_type": 1 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.879 "dma_device_type": 2 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "dma_device_id": "system", 00:14:02.879 "dma_device_type": 1 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.879 "dma_device_type": 2 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "dma_device_id": "system", 00:14:02.879 "dma_device_type": 1 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.879 "dma_device_type": 2 00:14:02.879 } 00:14:02.879 ], 00:14:02.879 "driver_specific": { 00:14:02.879 "raid": { 00:14:02.879 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:02.879 "strip_size_kb": 0, 00:14:02.879 "state": "online", 00:14:02.879 "raid_level": "raid1", 00:14:02.879 "superblock": true, 00:14:02.879 "num_base_bdevs": 4, 00:14:02.879 "num_base_bdevs_discovered": 4, 00:14:02.879 "num_base_bdevs_operational": 4, 00:14:02.879 "base_bdevs_list": [ 00:14:02.879 { 00:14:02.879 "name": "pt1", 00:14:02.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.879 "is_configured": true, 00:14:02.879 "data_offset": 2048, 00:14:02.879 "data_size": 63488 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "name": "pt2", 00:14:02.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.879 "is_configured": true, 00:14:02.879 "data_offset": 2048, 00:14:02.879 "data_size": 63488 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "name": "pt3", 00:14:02.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.879 "is_configured": true, 00:14:02.879 "data_offset": 2048, 00:14:02.879 "data_size": 63488 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "name": "pt4", 00:14:02.879 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:02.879 "is_configured": true, 00:14:02.879 "data_offset": 2048, 00:14:02.879 "data_size": 63488 00:14:02.879 } 00:14:02.879 ] 00:14:02.879 } 00:14:02.879 } 00:14:02.879 }' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:02.879 pt2 00:14:02.879 pt3 00:14:02.879 pt4' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.879 04:30:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.879 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.139 [2024-11-27 04:30:59.517908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eb671543-749b-47bf-9b78-d5ddf1818168 '!=' eb671543-749b-47bf-9b78-d5ddf1818168 ']' 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.139 [2024-11-27 04:30:59.545585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:03.139 04:30:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.139 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.140 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.140 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.140 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.140 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.140 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.140 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.140 "name": "raid_bdev1", 00:14:03.140 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:03.140 "strip_size_kb": 0, 00:14:03.140 "state": "online", 
00:14:03.140 "raid_level": "raid1", 00:14:03.140 "superblock": true, 00:14:03.140 "num_base_bdevs": 4, 00:14:03.140 "num_base_bdevs_discovered": 3, 00:14:03.140 "num_base_bdevs_operational": 3, 00:14:03.140 "base_bdevs_list": [ 00:14:03.140 { 00:14:03.140 "name": null, 00:14:03.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.140 "is_configured": false, 00:14:03.140 "data_offset": 0, 00:14:03.140 "data_size": 63488 00:14:03.140 }, 00:14:03.140 { 00:14:03.140 "name": "pt2", 00:14:03.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.140 "is_configured": true, 00:14:03.140 "data_offset": 2048, 00:14:03.140 "data_size": 63488 00:14:03.140 }, 00:14:03.140 { 00:14:03.140 "name": "pt3", 00:14:03.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.140 "is_configured": true, 00:14:03.140 "data_offset": 2048, 00:14:03.140 "data_size": 63488 00:14:03.140 }, 00:14:03.140 { 00:14:03.140 "name": "pt4", 00:14:03.140 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:03.140 "is_configured": true, 00:14:03.140 "data_offset": 2048, 00:14:03.140 "data_size": 63488 00:14:03.140 } 00:14:03.140 ] 00:14:03.140 }' 00:14:03.140 04:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.140 04:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.709 [2024-11-27 04:31:00.012791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.709 [2024-11-27 04:31:00.012896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.709 [2024-11-27 04:31:00.013038] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:03.709 [2024-11-27 04:31:00.013187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.709 [2024-11-27 04:31:00.013240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:03.709 
04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.709 [2024-11-27 04:31:00.108575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:03.709 [2024-11-27 04:31:00.108641] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.709 [2024-11-27 04:31:00.108663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:03.709 [2024-11-27 04:31:00.108673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.709 [2024-11-27 04:31:00.111391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.709 [2024-11-27 04:31:00.111514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:03.709 [2024-11-27 04:31:00.111637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:03.709 [2024-11-27 04:31:00.111699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.709 pt2 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.709 "name": "raid_bdev1", 00:14:03.709 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:03.709 "strip_size_kb": 0, 00:14:03.709 "state": "configuring", 00:14:03.709 "raid_level": "raid1", 00:14:03.709 "superblock": true, 00:14:03.709 "num_base_bdevs": 4, 00:14:03.709 "num_base_bdevs_discovered": 1, 00:14:03.709 "num_base_bdevs_operational": 3, 00:14:03.709 "base_bdevs_list": [ 00:14:03.709 { 00:14:03.709 "name": null, 00:14:03.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.709 "is_configured": false, 00:14:03.709 "data_offset": 2048, 00:14:03.709 "data_size": 63488 00:14:03.709 }, 00:14:03.709 { 00:14:03.709 "name": "pt2", 00:14:03.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.709 "is_configured": true, 00:14:03.709 "data_offset": 2048, 00:14:03.709 "data_size": 63488 00:14:03.709 }, 00:14:03.709 { 00:14:03.709 "name": null, 00:14:03.709 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.709 "is_configured": false, 00:14:03.709 "data_offset": 2048, 00:14:03.709 "data_size": 63488 00:14:03.709 }, 00:14:03.709 { 00:14:03.709 "name": null, 00:14:03.709 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:03.709 "is_configured": false, 00:14:03.709 "data_offset": 2048, 00:14:03.709 "data_size": 63488 00:14:03.709 } 00:14:03.709 ] 00:14:03.709 }' 
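Each `verify_raid_bdev_state` call in the trace above picks the array's entry out of the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and compares its fields against the expected values passed in (state, level, strip size, operational count). A minimal Python sketch of that same selection and the comparisons, run over a trimmed copy of the JSON just dumped (values taken from the log above; the helper name only mirrors the shell function in bdev/bdev_raid.sh, this is not part of the test suite):

```python
import json

# Trimmed copy of the bdev_raid_get_bdevs output dumped just above,
# captured after pt2 was recreated: 1 of 4 base bdevs discovered,
# 3 expected to be operational, so the array is still "configuring".
rpc_output = json.loads("""
[
  {
    "name": "raid_bdev1",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": null,  "is_configured": false},
      {"name": "pt2", "is_configured": true},
      {"name": null,  "is_configured": false},
      {"name": null,  "is_configured": false}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    # jq equivalent: .[] | select(.name == "raid_bdev1")
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # discovered must equal the number of configured base_bdevs_list entries
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured
    return info

info = verify_raid_bdev_state(rpc_output, "raid_bdev1", "configuring", "raid1", 0, 3)
print(info["num_base_bdevs_discovered"])  # 1
```

The same checks run against the later dumps in the log with `state="online"` and `operational=3` once pt3 and pt4 are claimed.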
00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.709 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.279 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:04.279 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:04.279 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.279 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.279 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.279 [2024-11-27 04:31:00.599838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:04.279 [2024-11-27 04:31:00.599995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.279 [2024-11-27 04:31:00.600048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:04.279 [2024-11-27 04:31:00.600097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.279 [2024-11-27 04:31:00.600750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.280 [2024-11-27 04:31:00.600826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.280 [2024-11-27 04:31:00.600995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:04.280 [2024-11-27 04:31:00.601056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:04.280 pt3 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.280 "name": "raid_bdev1", 00:14:04.280 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:04.280 "strip_size_kb": 0, 00:14:04.280 "state": "configuring", 00:14:04.280 "raid_level": "raid1", 00:14:04.280 "superblock": true, 00:14:04.280 "num_base_bdevs": 4, 00:14:04.280 "num_base_bdevs_discovered": 2, 00:14:04.280 "num_base_bdevs_operational": 3, 00:14:04.280 
"base_bdevs_list": [ 00:14:04.280 { 00:14:04.280 "name": null, 00:14:04.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.280 "is_configured": false, 00:14:04.280 "data_offset": 2048, 00:14:04.280 "data_size": 63488 00:14:04.280 }, 00:14:04.280 { 00:14:04.280 "name": "pt2", 00:14:04.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.280 "is_configured": true, 00:14:04.280 "data_offset": 2048, 00:14:04.280 "data_size": 63488 00:14:04.280 }, 00:14:04.280 { 00:14:04.280 "name": "pt3", 00:14:04.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.280 "is_configured": true, 00:14:04.280 "data_offset": 2048, 00:14:04.280 "data_size": 63488 00:14:04.280 }, 00:14:04.280 { 00:14:04.280 "name": null, 00:14:04.280 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:04.280 "is_configured": false, 00:14:04.280 "data_offset": 2048, 00:14:04.280 "data_size": 63488 00:14:04.280 } 00:14:04.280 ] 00:14:04.280 }' 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.280 04:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.539 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.540 [2024-11-27 04:31:01.087205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:04.540 [2024-11-27 04:31:01.087382] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.540 [2024-11-27 04:31:01.087452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:04.540 [2024-11-27 04:31:01.087495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.540 [2024-11-27 04:31:01.088146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.540 [2024-11-27 04:31:01.088217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:04.540 [2024-11-27 04:31:01.088384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:04.540 [2024-11-27 04:31:01.088452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:04.540 [2024-11-27 04:31:01.088669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:04.540 [2024-11-27 04:31:01.088714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:04.540 [2024-11-27 04:31:01.089062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:04.540 [2024-11-27 04:31:01.089332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:04.540 [2024-11-27 04:31:01.089389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:04.540 [2024-11-27 04:31:01.089634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.540 pt4 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.540 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.799 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.799 "name": "raid_bdev1", 00:14:04.799 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:04.799 "strip_size_kb": 0, 00:14:04.799 "state": "online", 00:14:04.799 "raid_level": "raid1", 00:14:04.799 "superblock": true, 00:14:04.799 "num_base_bdevs": 4, 00:14:04.799 "num_base_bdevs_discovered": 3, 00:14:04.799 "num_base_bdevs_operational": 3, 00:14:04.799 "base_bdevs_list": [ 00:14:04.799 { 00:14:04.799 "name": null, 00:14:04.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.799 "is_configured": false, 00:14:04.799 
"data_offset": 2048, 00:14:04.799 "data_size": 63488 00:14:04.799 }, 00:14:04.799 { 00:14:04.799 "name": "pt2", 00:14:04.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.799 "is_configured": true, 00:14:04.799 "data_offset": 2048, 00:14:04.799 "data_size": 63488 00:14:04.799 }, 00:14:04.799 { 00:14:04.799 "name": "pt3", 00:14:04.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.799 "is_configured": true, 00:14:04.799 "data_offset": 2048, 00:14:04.799 "data_size": 63488 00:14:04.799 }, 00:14:04.799 { 00:14:04.799 "name": "pt4", 00:14:04.799 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:04.799 "is_configured": true, 00:14:04.799 "data_offset": 2048, 00:14:04.799 "data_size": 63488 00:14:04.799 } 00:14:04.799 ] 00:14:04.799 }' 00:14:04.799 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.799 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.058 [2024-11-27 04:31:01.578278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.058 [2024-11-27 04:31:01.578401] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.058 [2024-11-27 04:31:01.578527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.058 [2024-11-27 04:31:01.578625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.058 [2024-11-27 04:31:01.578642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:05.058 04:31:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:05.058 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.059 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.059 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.059 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.059 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.059 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.318 [2024-11-27 04:31:01.642195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.318 [2024-11-27 04:31:01.642311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:05.318 [2024-11-27 04:31:01.642336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:05.318 [2024-11-27 04:31:01.642354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.318 [2024-11-27 04:31:01.645396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.318 [2024-11-27 04:31:01.645445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:05.318 [2024-11-27 04:31:01.645566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:05.318 [2024-11-27 04:31:01.645641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.318 [2024-11-27 04:31:01.645819] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:05.318 [2024-11-27 04:31:01.645837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.318 [2024-11-27 04:31:01.645855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:05.318 [2024-11-27 04:31:01.645934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.318 [2024-11-27 04:31:01.646068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:05.318 pt1 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.318 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.319 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.319 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.319 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.319 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.319 "name": "raid_bdev1", 00:14:05.319 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:05.319 "strip_size_kb": 0, 00:14:05.319 "state": "configuring", 00:14:05.319 "raid_level": "raid1", 00:14:05.319 "superblock": true, 00:14:05.319 "num_base_bdevs": 4, 00:14:05.319 "num_base_bdevs_discovered": 2, 00:14:05.319 "num_base_bdevs_operational": 3, 00:14:05.319 "base_bdevs_list": [ 00:14:05.319 { 00:14:05.319 "name": null, 00:14:05.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.319 "is_configured": false, 00:14:05.319 "data_offset": 2048, 00:14:05.319 
"data_size": 63488 00:14:05.319 }, 00:14:05.319 { 00:14:05.319 "name": "pt2", 00:14:05.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.319 "is_configured": true, 00:14:05.319 "data_offset": 2048, 00:14:05.319 "data_size": 63488 00:14:05.319 }, 00:14:05.319 { 00:14:05.319 "name": "pt3", 00:14:05.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.319 "is_configured": true, 00:14:05.319 "data_offset": 2048, 00:14:05.319 "data_size": 63488 00:14:05.319 }, 00:14:05.319 { 00:14:05.319 "name": null, 00:14:05.319 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.319 "is_configured": false, 00:14:05.319 "data_offset": 2048, 00:14:05.319 "data_size": 63488 00:14:05.319 } 00:14:05.319 ] 00:14:05.319 }' 00:14:05.319 04:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.319 04:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.580 [2024-11-27 
04:31:02.129393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:05.580 [2024-11-27 04:31:02.129545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.580 [2024-11-27 04:31:02.129596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:05.580 [2024-11-27 04:31:02.129630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.580 [2024-11-27 04:31:02.130296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.580 [2024-11-27 04:31:02.130376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:05.580 [2024-11-27 04:31:02.130530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:05.580 [2024-11-27 04:31:02.130593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:05.580 [2024-11-27 04:31:02.130779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:05.580 [2024-11-27 04:31:02.130821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.580 [2024-11-27 04:31:02.131189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:05.580 [2024-11-27 04:31:02.131417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:05.580 [2024-11-27 04:31:02.131494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:05.580 [2024-11-27 04:31:02.131734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.580 pt4 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.580 04:31:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.580 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.838 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.838 "name": "raid_bdev1", 00:14:05.838 "uuid": "eb671543-749b-47bf-9b78-d5ddf1818168", 00:14:05.838 "strip_size_kb": 0, 00:14:05.838 "state": "online", 00:14:05.838 "raid_level": "raid1", 00:14:05.838 "superblock": true, 00:14:05.838 "num_base_bdevs": 4, 00:14:05.838 "num_base_bdevs_discovered": 3, 00:14:05.838 "num_base_bdevs_operational": 3, 00:14:05.838 "base_bdevs_list": [ 00:14:05.838 { 
00:14:05.838 "name": null, 00:14:05.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.838 "is_configured": false, 00:14:05.838 "data_offset": 2048, 00:14:05.838 "data_size": 63488 00:14:05.838 }, 00:14:05.838 { 00:14:05.838 "name": "pt2", 00:14:05.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.838 "is_configured": true, 00:14:05.838 "data_offset": 2048, 00:14:05.838 "data_size": 63488 00:14:05.838 }, 00:14:05.838 { 00:14:05.838 "name": "pt3", 00:14:05.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.838 "is_configured": true, 00:14:05.838 "data_offset": 2048, 00:14:05.838 "data_size": 63488 00:14:05.838 }, 00:14:05.838 { 00:14:05.838 "name": "pt4", 00:14:05.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.838 "is_configured": true, 00:14:05.838 "data_offset": 2048, 00:14:05.838 "data_size": 63488 00:14:05.838 } 00:14:05.838 ] 00:14:05.838 }' 00:14:05.838 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.838 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:06.096 
04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.096 [2024-11-27 04:31:02.648918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.096 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' eb671543-749b-47bf-9b78-d5ddf1818168 '!=' eb671543-749b-47bf-9b78-d5ddf1818168 ']' 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74828 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74828 ']' 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74828 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74828 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.355 killing process with pid 74828 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74828' 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74828 00:14:06.355 [2024-11-27 04:31:02.727643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.355 04:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74828 00:14:06.355 [2024-11-27 04:31:02.727790] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.355 [2024-11-27 04:31:02.727892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.355 [2024-11-27 04:31:02.727908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:06.922 [2024-11-27 04:31:03.208909] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.298 04:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:08.298 ************************************ 00:14:08.298 END TEST raid_superblock_test 00:14:08.298 ************************************ 00:14:08.298 00:14:08.298 real 0m9.137s 00:14:08.298 user 0m14.030s 00:14:08.298 sys 0m1.797s 00:14:08.298 04:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.298 04:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.298 04:31:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:08.298 04:31:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:08.298 04:31:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.298 04:31:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.298 ************************************ 00:14:08.298 START TEST raid_read_error_test 00:14:08.298 ************************************ 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:08.298 04:31:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.298 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LVEFGphXkL 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75321 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75321 00:14:08.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75321 ']' 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.299 04:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.299 [2024-11-27 04:31:04.744714] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:14:08.299 [2024-11-27 04:31:04.745002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75321 ] 00:14:08.559 [2024-11-27 04:31:04.927408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.559 [2024-11-27 04:31:05.077469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.818 [2024-11-27 04:31:05.344444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.818 [2024-11-27 04:31:05.344553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.076 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.076 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:09.076 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.076 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:09.076 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.076 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.334 BaseBdev1_malloc 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.334 true 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.334 [2024-11-27 04:31:05.718044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:09.334 [2024-11-27 04:31:05.718218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.334 [2024-11-27 04:31:05.718291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:09.334 [2024-11-27 04:31:05.718348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.334 [2024-11-27 04:31:05.721358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.334 [2024-11-27 04:31:05.721449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.334 BaseBdev1 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.334 BaseBdev2_malloc 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.334 true 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.334 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.334 [2024-11-27 04:31:05.798681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:09.334 [2024-11-27 04:31:05.798813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.335 [2024-11-27 04:31:05.798873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:09.335 [2024-11-27 04:31:05.798913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.335 [2024-11-27 04:31:05.801772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.335 [2024-11-27 04:31:05.801870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:09.335 BaseBdev2 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.335 BaseBdev3_malloc 00:14:09.335 04:31:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.335 true 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.335 [2024-11-27 04:31:05.894517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:09.335 [2024-11-27 04:31:05.894684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.335 [2024-11-27 04:31:05.894735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:09.335 [2024-11-27 04:31:05.894780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.335 [2024-11-27 04:31:05.897828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.335 [2024-11-27 04:31:05.897932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:09.335 BaseBdev3 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.335 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.594 BaseBdev4_malloc 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.594 true 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.594 [2024-11-27 04:31:05.976115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:09.594 [2024-11-27 04:31:05.976188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.594 [2024-11-27 04:31:05.976216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:09.594 [2024-11-27 04:31:05.976230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.594 [2024-11-27 04:31:05.979164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.594 [2024-11-27 04:31:05.979259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:09.594 BaseBdev4 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.594 [2024-11-27 04:31:05.988292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.594 [2024-11-27 04:31:05.990923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:09.594 [2024-11-27 04:31:05.991065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.594 [2024-11-27 04:31:05.991199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:09.594 [2024-11-27 04:31:05.991551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:09.594 [2024-11-27 04:31:05.991613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:09.594 [2024-11-27 04:31:05.991979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:09.594 [2024-11-27 04:31:05.992267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:09.594 [2024-11-27 04:31:05.992317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:09.594 [2024-11-27 04:31:05.992641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:09.594 04:31:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.594 04:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.594 04:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.594 04:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.594 04:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.594 04:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.594 04:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.594 "name": "raid_bdev1", 00:14:09.594 "uuid": "e080dada-63c7-44e1-9070-81964aa3eedd", 00:14:09.594 "strip_size_kb": 0, 00:14:09.594 "state": "online", 00:14:09.594 "raid_level": "raid1", 00:14:09.594 "superblock": true, 00:14:09.594 "num_base_bdevs": 4, 00:14:09.594 "num_base_bdevs_discovered": 4, 00:14:09.594 "num_base_bdevs_operational": 4, 00:14:09.594 "base_bdevs_list": [ 00:14:09.594 { 
00:14:09.594 "name": "BaseBdev1", 00:14:09.594 "uuid": "5f6d0fb3-4271-54d5-811f-17d1bd00ba7f", 00:14:09.594 "is_configured": true, 00:14:09.594 "data_offset": 2048, 00:14:09.594 "data_size": 63488 00:14:09.594 }, 00:14:09.594 { 00:14:09.594 "name": "BaseBdev2", 00:14:09.594 "uuid": "7e1536ca-6295-5180-bb35-ff86378872b4", 00:14:09.594 "is_configured": true, 00:14:09.594 "data_offset": 2048, 00:14:09.594 "data_size": 63488 00:14:09.594 }, 00:14:09.594 { 00:14:09.594 "name": "BaseBdev3", 00:14:09.594 "uuid": "7e8b28b7-b74c-5cf3-b36e-4953ca71ff01", 00:14:09.594 "is_configured": true, 00:14:09.594 "data_offset": 2048, 00:14:09.594 "data_size": 63488 00:14:09.594 }, 00:14:09.594 { 00:14:09.594 "name": "BaseBdev4", 00:14:09.594 "uuid": "0ec88ed0-c091-5dd1-9b27-2f38c5b48415", 00:14:09.594 "is_configured": true, 00:14:09.594 "data_offset": 2048, 00:14:09.594 "data_size": 63488 00:14:09.594 } 00:14:09.594 ] 00:14:09.594 }' 00:14:09.594 04:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.594 04:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.852 04:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:09.852 04:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:10.110 [2024-11-27 04:31:06.525444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.047 04:31:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.047 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.048 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.048 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.048 04:31:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.048 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.048 "name": "raid_bdev1", 00:14:11.048 "uuid": "e080dada-63c7-44e1-9070-81964aa3eedd", 00:14:11.048 "strip_size_kb": 0, 00:14:11.048 "state": "online", 00:14:11.048 "raid_level": "raid1", 00:14:11.048 "superblock": true, 00:14:11.048 "num_base_bdevs": 4, 00:14:11.048 "num_base_bdevs_discovered": 4, 00:14:11.048 "num_base_bdevs_operational": 4, 00:14:11.048 "base_bdevs_list": [ 00:14:11.048 { 00:14:11.048 "name": "BaseBdev1", 00:14:11.048 "uuid": "5f6d0fb3-4271-54d5-811f-17d1bd00ba7f", 00:14:11.048 "is_configured": true, 00:14:11.048 "data_offset": 2048, 00:14:11.048 "data_size": 63488 00:14:11.048 }, 00:14:11.048 { 00:14:11.048 "name": "BaseBdev2", 00:14:11.048 "uuid": "7e1536ca-6295-5180-bb35-ff86378872b4", 00:14:11.048 "is_configured": true, 00:14:11.048 "data_offset": 2048, 00:14:11.048 "data_size": 63488 00:14:11.048 }, 00:14:11.048 { 00:14:11.048 "name": "BaseBdev3", 00:14:11.048 "uuid": "7e8b28b7-b74c-5cf3-b36e-4953ca71ff01", 00:14:11.048 "is_configured": true, 00:14:11.048 "data_offset": 2048, 00:14:11.048 "data_size": 63488 00:14:11.048 }, 00:14:11.048 { 00:14:11.048 "name": "BaseBdev4", 00:14:11.048 "uuid": "0ec88ed0-c091-5dd1-9b27-2f38c5b48415", 00:14:11.048 "is_configured": true, 00:14:11.048 "data_offset": 2048, 00:14:11.048 "data_size": 63488 00:14:11.048 } 00:14:11.048 ] 00:14:11.048 }' 00:14:11.048 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.048 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.616 [2024-11-27 04:31:07.939760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.616 [2024-11-27 04:31:07.939886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.616 [2024-11-27 04:31:07.943165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.616 [2024-11-27 04:31:07.943286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.616 [2024-11-27 04:31:07.943481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.616 [2024-11-27 04:31:07.943553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:11.616 { 00:14:11.616 "results": [ 00:14:11.616 { 00:14:11.616 "job": "raid_bdev1", 00:14:11.616 "core_mask": "0x1", 00:14:11.616 "workload": "randrw", 00:14:11.616 "percentage": 50, 00:14:11.616 "status": "finished", 00:14:11.616 "queue_depth": 1, 00:14:11.616 "io_size": 131072, 00:14:11.616 "runtime": 1.414859, 00:14:11.616 "iops": 6937.793801361125, 00:14:11.616 "mibps": 867.2242251701406, 00:14:11.616 "io_failed": 0, 00:14:11.616 "io_timeout": 0, 00:14:11.616 "avg_latency_us": 140.97123313510068, 00:14:11.616 "min_latency_us": 25.7117903930131, 00:14:11.616 "max_latency_us": 1652.709170305677 00:14:11.616 } 00:14:11.616 ], 00:14:11.616 "core_count": 1 00:14:11.616 } 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75321 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75321 ']' 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75321 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75321 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75321' 00:14:11.616 killing process with pid 75321 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75321 00:14:11.616 [2024-11-27 04:31:07.989163] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.616 04:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75321 00:14:11.876 [2024-11-27 04:31:08.396686] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LVEFGphXkL 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:13.258 ************************************ 00:14:13.258 END TEST raid_read_error_test 
00:14:13.258 ************************************ 00:14:13.258 00:14:13.258 real 0m5.194s 00:14:13.258 user 0m5.965s 00:14:13.258 sys 0m0.766s 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.258 04:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.518 04:31:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:14:13.518 04:31:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:13.518 04:31:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.518 04:31:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.518 ************************************ 00:14:13.518 START TEST raid_write_error_test 00:14:13.518 ************************************ 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RVXQR7bd3h 00:14:13.518 04:31:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75472 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75472 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75472 ']' 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.518 04:31:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.518 [2024-11-27 04:31:10.022468] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:14:13.518 [2024-11-27 04:31:10.022619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75472 ] 00:14:13.778 [2024-11-27 04:31:10.205708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.778 [2024-11-27 04:31:10.354739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.347 [2024-11-27 04:31:10.626386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.347 [2024-11-27 04:31:10.626479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.347 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.347 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:14.347 04:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.347 04:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:14.347 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.347 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 BaseBdev1_malloc 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 true 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 [2024-11-27 04:31:10.979734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:14.608 [2024-11-27 04:31:10.979859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.608 [2024-11-27 04:31:10.979893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:14.608 [2024-11-27 04:31:10.979907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.608 [2024-11-27 04:31:10.982536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.608 [2024-11-27 04:31:10.982578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:14.608 BaseBdev1 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.608 04:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 BaseBdev2_malloc 00:14:14.608 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.608 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:14.608 04:31:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.608 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 true 00:14:14.608 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.609 [2024-11-27 04:31:11.055648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:14.609 [2024-11-27 04:31:11.055719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.609 [2024-11-27 04:31:11.055745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:14.609 [2024-11-27 04:31:11.055759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.609 [2024-11-27 04:31:11.058443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.609 [2024-11-27 04:31:11.058486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:14.609 BaseBdev2 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:14.609 BaseBdev3_malloc 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.609 true 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.609 [2024-11-27 04:31:11.144130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:14.609 [2024-11-27 04:31:11.144200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.609 [2024-11-27 04:31:11.144229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:14.609 [2024-11-27 04:31:11.144243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.609 [2024-11-27 04:31:11.146801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.609 [2024-11-27 04:31:11.146907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:14.609 BaseBdev3 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.609 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.867 BaseBdev4_malloc 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.867 true 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.867 [2024-11-27 04:31:11.223470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:14.867 [2024-11-27 04:31:11.223529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.867 [2024-11-27 04:31:11.223554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:14.867 [2024-11-27 04:31:11.223566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.867 [2024-11-27 04:31:11.226088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.867 [2024-11-27 04:31:11.226158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:14.867 BaseBdev4 
00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.867 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.867 [2024-11-27 04:31:11.235597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.867 [2024-11-27 04:31:11.238131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.867 [2024-11-27 04:31:11.238227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.867 [2024-11-27 04:31:11.238305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:14.868 [2024-11-27 04:31:11.238597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:14.868 [2024-11-27 04:31:11.238617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:14.868 [2024-11-27 04:31:11.238967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:14.868 [2024-11-27 04:31:11.239227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:14.868 [2024-11-27 04:31:11.239238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:14.868 [2024-11-27 04:31:11.239564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.868 "name": "raid_bdev1", 00:14:14.868 "uuid": "6cebaf9b-7abe-45dc-8426-934a423a4a1b", 00:14:14.868 "strip_size_kb": 0, 00:14:14.868 "state": "online", 00:14:14.868 "raid_level": "raid1", 00:14:14.868 "superblock": true, 00:14:14.868 "num_base_bdevs": 4, 00:14:14.868 "num_base_bdevs_discovered": 4, 00:14:14.868 
"num_base_bdevs_operational": 4, 00:14:14.868 "base_bdevs_list": [ 00:14:14.868 { 00:14:14.868 "name": "BaseBdev1", 00:14:14.868 "uuid": "b66e7f09-1d45-582f-a182-6cc2f4c1de59", 00:14:14.868 "is_configured": true, 00:14:14.868 "data_offset": 2048, 00:14:14.868 "data_size": 63488 00:14:14.868 }, 00:14:14.868 { 00:14:14.868 "name": "BaseBdev2", 00:14:14.868 "uuid": "22dba29a-06a9-51dd-b2cb-efabf538674d", 00:14:14.868 "is_configured": true, 00:14:14.868 "data_offset": 2048, 00:14:14.868 "data_size": 63488 00:14:14.868 }, 00:14:14.868 { 00:14:14.868 "name": "BaseBdev3", 00:14:14.868 "uuid": "cf31fe00-43a6-5140-bd89-dfe65d39f3e8", 00:14:14.868 "is_configured": true, 00:14:14.868 "data_offset": 2048, 00:14:14.868 "data_size": 63488 00:14:14.868 }, 00:14:14.868 { 00:14:14.868 "name": "BaseBdev4", 00:14:14.868 "uuid": "03ef93bb-84c6-51e5-945a-69bf0b861648", 00:14:14.868 "is_configured": true, 00:14:14.868 "data_offset": 2048, 00:14:14.868 "data_size": 63488 00:14:14.868 } 00:14:14.868 ] 00:14:14.868 }' 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.868 04:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.437 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:15.437 04:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:15.437 [2024-11-27 04:31:11.840438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.377 [2024-11-27 04:31:12.743713] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:16.377 [2024-11-27 04:31:12.743794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.377 [2024-11-27 04:31:12.744068] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.377 "name": "raid_bdev1", 00:14:16.377 "uuid": "6cebaf9b-7abe-45dc-8426-934a423a4a1b", 00:14:16.377 "strip_size_kb": 0, 00:14:16.377 "state": "online", 00:14:16.377 "raid_level": "raid1", 00:14:16.377 "superblock": true, 00:14:16.377 "num_base_bdevs": 4, 00:14:16.377 "num_base_bdevs_discovered": 3, 00:14:16.377 "num_base_bdevs_operational": 3, 00:14:16.377 "base_bdevs_list": [ 00:14:16.377 { 00:14:16.377 "name": null, 00:14:16.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.377 "is_configured": false, 00:14:16.377 "data_offset": 0, 00:14:16.377 "data_size": 63488 00:14:16.377 }, 00:14:16.377 { 00:14:16.377 "name": "BaseBdev2", 00:14:16.377 "uuid": "22dba29a-06a9-51dd-b2cb-efabf538674d", 00:14:16.377 "is_configured": true, 00:14:16.377 "data_offset": 2048, 00:14:16.377 "data_size": 63488 00:14:16.377 }, 00:14:16.377 { 00:14:16.377 "name": "BaseBdev3", 00:14:16.377 "uuid": "cf31fe00-43a6-5140-bd89-dfe65d39f3e8", 00:14:16.377 "is_configured": true, 00:14:16.377 "data_offset": 2048, 00:14:16.377 "data_size": 63488 00:14:16.377 }, 00:14:16.377 { 00:14:16.377 "name": "BaseBdev4", 00:14:16.377 "uuid": "03ef93bb-84c6-51e5-945a-69bf0b861648", 00:14:16.377 "is_configured": true, 00:14:16.377 "data_offset": 2048, 00:14:16.377 "data_size": 63488 00:14:16.377 } 00:14:16.377 ] 
00:14:16.377 }' 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.377 04:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.638 [2024-11-27 04:31:13.174580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.638 [2024-11-27 04:31:13.174701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.638 [2024-11-27 04:31:13.178042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.638 [2024-11-27 04:31:13.178096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.638 [2024-11-27 04:31:13.178349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.638 [2024-11-27 04:31:13.178415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.638 { 00:14:16.638 "results": [ 00:14:16.638 { 00:14:16.638 "job": "raid_bdev1", 00:14:16.638 "core_mask": "0x1", 00:14:16.638 "workload": "randrw", 00:14:16.638 "percentage": 50, 00:14:16.638 "status": "finished", 00:14:16.638 "queue_depth": 1, 00:14:16.638 "io_size": 131072, 00:14:16.638 "runtime": 1.33476, 00:14:16.638 "iops": 7632.083670472594, 00:14:16.638 "mibps": 954.0104588090743, 00:14:16.638 "io_failed": 0, 00:14:16.638 "io_timeout": 0, 00:14:16.638 "avg_latency_us": 127.9851359490197, 00:14:16.638 "min_latency_us": 25.2646288209607, 00:14:16.638 
"max_latency_us": 1659.8637554585152 00:14:16.638 } 00:14:16.638 ], 00:14:16.638 "core_count": 1 00:14:16.638 } 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75472 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75472 ']' 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75472 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75472 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.638 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.924 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75472' 00:14:16.924 killing process with pid 75472 00:14:16.924 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75472 00:14:16.924 [2024-11-27 04:31:13.223507] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.924 04:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75472 00:14:17.245 [2024-11-27 04:31:13.609689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.623 04:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:18.624 04:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RVXQR7bd3h 00:14:18.624 04:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:18.624 04:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:14:18.624 04:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:18.624 04:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:18.624 04:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:18.624 04:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:18.624 00:14:18.624 real 0m5.118s 00:14:18.624 user 0m5.902s 00:14:18.624 sys 0m0.750s 00:14:18.624 04:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.624 04:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.624 ************************************ 00:14:18.624 END TEST raid_write_error_test 00:14:18.624 ************************************ 00:14:18.624 04:31:15 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:18.624 04:31:15 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:18.624 04:31:15 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:18.624 04:31:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:18.624 04:31:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.624 04:31:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.624 ************************************ 00:14:18.624 START TEST raid_rebuild_test 00:14:18.624 ************************************ 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:18.624 
04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75627 00:14:18.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75627 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75627 ']' 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.624 04:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:18.624 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:18.624 Zero copy mechanism will not be used. 00:14:18.624 [2024-11-27 04:31:15.181296] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:14:18.624 [2024-11-27 04:31:15.181418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75627 ] 00:14:18.883 [2024-11-27 04:31:15.352620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.142 [2024-11-27 04:31:15.498650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.400 [2024-11-27 04:31:15.764769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.400 [2024-11-27 04:31:15.764980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 BaseBdev1_malloc 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 [2024-11-27 04:31:16.082168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:19.660 
[2024-11-27 04:31:16.082387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.660 [2024-11-27 04:31:16.082444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:19.660 [2024-11-27 04:31:16.082469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.660 [2024-11-27 04:31:16.086166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.660 [2024-11-27 04:31:16.086239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.660 BaseBdev1 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 BaseBdev2_malloc 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.660 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 [2024-11-27 04:31:16.147993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:19.661 [2024-11-27 04:31:16.148179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.661 [2024-11-27 04:31:16.148242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:14:19.661 [2024-11-27 04:31:16.148293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.661 [2024-11-27 04:31:16.151275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.661 [2024-11-27 04:31:16.151362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:19.661 BaseBdev2 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.661 spare_malloc 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.661 spare_delay 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.661 [2024-11-27 04:31:16.234057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:19.661 [2024-11-27 04:31:16.234220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:19.661 [2024-11-27 04:31:16.234266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:19.661 [2024-11-27 04:31:16.234300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.661 [2024-11-27 04:31:16.236942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.661 [2024-11-27 04:31:16.237027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:19.661 spare 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.661 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.661 [2024-11-27 04:31:16.242096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.661 [2024-11-27 04:31:16.244462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.661 [2024-11-27 04:31:16.244616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:19.661 [2024-11-27 04:31:16.244666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:19.921 [2024-11-27 04:31:16.244987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:19.921 [2024-11-27 04:31:16.245236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:19.921 [2024-11-27 04:31:16.245254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:19.921 [2024-11-27 04:31:16.245446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.921 "name": "raid_bdev1", 00:14:19.921 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:19.921 "strip_size_kb": 0, 00:14:19.921 "state": "online", 00:14:19.921 
"raid_level": "raid1", 00:14:19.921 "superblock": false, 00:14:19.921 "num_base_bdevs": 2, 00:14:19.921 "num_base_bdevs_discovered": 2, 00:14:19.921 "num_base_bdevs_operational": 2, 00:14:19.921 "base_bdevs_list": [ 00:14:19.921 { 00:14:19.921 "name": "BaseBdev1", 00:14:19.921 "uuid": "99d47e39-504d-5873-b48b-29f481931ba1", 00:14:19.921 "is_configured": true, 00:14:19.921 "data_offset": 0, 00:14:19.921 "data_size": 65536 00:14:19.921 }, 00:14:19.921 { 00:14:19.921 "name": "BaseBdev2", 00:14:19.921 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:19.921 "is_configured": true, 00:14:19.921 "data_offset": 0, 00:14:19.921 "data_size": 65536 00:14:19.921 } 00:14:19.921 ] 00:14:19.921 }' 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.921 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.181 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.181 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:20.181 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.181 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.181 [2024-11-27 04:31:16.717686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.181 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.181 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:20.181 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:20.181 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.181 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.181 04:31:16 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.439 04:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:20.439 [2024-11-27 04:31:17.005014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:20.698 /dev/nbd0 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.698 1+0 records in 00:14:20.698 1+0 records out 00:14:20.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649089 s, 6.3 MB/s 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:20.698 04:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:24.896 65536+0 records in 00:14:24.896 65536+0 records out 00:14:24.896 33554432 bytes (34 MB, 32 MiB) copied, 4.28275 s, 7.8 MB/s 00:14:24.896 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:24.896 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.896 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.896 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.896 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:24.896 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.896 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.155 [2024-11-27 04:31:21.610748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.155 [2024-11-27 04:31:21.630853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.155 04:31:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.155 "name": "raid_bdev1", 00:14:25.155 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:25.155 "strip_size_kb": 0, 00:14:25.155 "state": "online", 00:14:25.155 "raid_level": "raid1", 00:14:25.155 "superblock": false, 00:14:25.155 "num_base_bdevs": 2, 00:14:25.155 "num_base_bdevs_discovered": 1, 00:14:25.155 "num_base_bdevs_operational": 1, 00:14:25.155 "base_bdevs_list": [ 00:14:25.155 { 00:14:25.155 "name": null, 00:14:25.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.155 "is_configured": false, 00:14:25.155 "data_offset": 0, 00:14:25.155 "data_size": 65536 00:14:25.155 }, 00:14:25.155 { 00:14:25.155 "name": "BaseBdev2", 00:14:25.155 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:25.155 "is_configured": true, 00:14:25.155 "data_offset": 0, 00:14:25.155 "data_size": 65536 00:14:25.155 } 00:14:25.155 ] 00:14:25.155 }' 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.155 04:31:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.738 04:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.738 04:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.738 04:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.738 [2024-11-27 04:31:22.094063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.738 [2024-11-27 04:31:22.112558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:14:25.738 04:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.738 04:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:25.738 [2024-11-27 04:31:22.114585] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.676 "name": "raid_bdev1", 00:14:26.676 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:26.676 "strip_size_kb": 0, 00:14:26.676 "state": "online", 00:14:26.676 "raid_level": "raid1", 00:14:26.676 "superblock": false, 00:14:26.676 "num_base_bdevs": 2, 00:14:26.676 "num_base_bdevs_discovered": 2, 00:14:26.676 "num_base_bdevs_operational": 2, 00:14:26.676 "process": { 00:14:26.676 "type": "rebuild", 00:14:26.676 "target": "spare", 00:14:26.676 "progress": { 00:14:26.676 
"blocks": 20480, 00:14:26.676 "percent": 31 00:14:26.676 } 00:14:26.676 }, 00:14:26.676 "base_bdevs_list": [ 00:14:26.676 { 00:14:26.676 "name": "spare", 00:14:26.676 "uuid": "54cb0394-b1b8-55ff-82af-ea0647c41aeb", 00:14:26.676 "is_configured": true, 00:14:26.676 "data_offset": 0, 00:14:26.676 "data_size": 65536 00:14:26.676 }, 00:14:26.676 { 00:14:26.676 "name": "BaseBdev2", 00:14:26.676 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:26.676 "is_configured": true, 00:14:26.676 "data_offset": 0, 00:14:26.676 "data_size": 65536 00:14:26.676 } 00:14:26.676 ] 00:14:26.676 }' 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.676 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.937 [2024-11-27 04:31:23.278199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.937 [2024-11-27 04:31:23.320611] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.937 [2024-11-27 04:31:23.320699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.937 [2024-11-27 04:31:23.320715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.937 [2024-11-27 04:31:23.320727] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.937 04:31:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.937 "name": "raid_bdev1", 00:14:26.937 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:26.937 "strip_size_kb": 0, 00:14:26.937 "state": "online", 00:14:26.937 "raid_level": "raid1", 00:14:26.937 
"superblock": false, 00:14:26.937 "num_base_bdevs": 2, 00:14:26.937 "num_base_bdevs_discovered": 1, 00:14:26.937 "num_base_bdevs_operational": 1, 00:14:26.937 "base_bdevs_list": [ 00:14:26.937 { 00:14:26.937 "name": null, 00:14:26.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.937 "is_configured": false, 00:14:26.937 "data_offset": 0, 00:14:26.937 "data_size": 65536 00:14:26.937 }, 00:14:26.937 { 00:14:26.937 "name": "BaseBdev2", 00:14:26.937 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:26.937 "is_configured": true, 00:14:26.937 "data_offset": 0, 00:14:26.937 "data_size": 65536 00:14:26.937 } 00:14:26.937 ] 00:14:26.937 }' 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.937 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:27.507 "name": "raid_bdev1", 00:14:27.507 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:27.507 "strip_size_kb": 0, 00:14:27.507 "state": "online", 00:14:27.507 "raid_level": "raid1", 00:14:27.507 "superblock": false, 00:14:27.507 "num_base_bdevs": 2, 00:14:27.507 "num_base_bdevs_discovered": 1, 00:14:27.507 "num_base_bdevs_operational": 1, 00:14:27.507 "base_bdevs_list": [ 00:14:27.507 { 00:14:27.507 "name": null, 00:14:27.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.507 "is_configured": false, 00:14:27.507 "data_offset": 0, 00:14:27.507 "data_size": 65536 00:14:27.507 }, 00:14:27.507 { 00:14:27.507 "name": "BaseBdev2", 00:14:27.507 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:27.507 "is_configured": true, 00:14:27.507 "data_offset": 0, 00:14:27.507 "data_size": 65536 00:14:27.507 } 00:14:27.507 ] 00:14:27.507 }' 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.507 04:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.507 [2024-11-27 04:31:23.992213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.507 [2024-11-27 04:31:24.008925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:27.507 04:31:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.507 
04:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:27.507 [2024-11-27 04:31:24.010815] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.447 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.447 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.447 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.447 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.447 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.447 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.447 04:31:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.447 04:31:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.447 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.707 "name": "raid_bdev1", 00:14:28.707 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:28.707 "strip_size_kb": 0, 00:14:28.707 "state": "online", 00:14:28.707 "raid_level": "raid1", 00:14:28.707 "superblock": false, 00:14:28.707 "num_base_bdevs": 2, 00:14:28.707 "num_base_bdevs_discovered": 2, 00:14:28.707 "num_base_bdevs_operational": 2, 00:14:28.707 "process": { 00:14:28.707 "type": "rebuild", 00:14:28.707 "target": "spare", 00:14:28.707 "progress": { 00:14:28.707 "blocks": 20480, 00:14:28.707 "percent": 31 00:14:28.707 } 00:14:28.707 }, 00:14:28.707 "base_bdevs_list": [ 
00:14:28.707 { 00:14:28.707 "name": "spare", 00:14:28.707 "uuid": "54cb0394-b1b8-55ff-82af-ea0647c41aeb", 00:14:28.707 "is_configured": true, 00:14:28.707 "data_offset": 0, 00:14:28.707 "data_size": 65536 00:14:28.707 }, 00:14:28.707 { 00:14:28.707 "name": "BaseBdev2", 00:14:28.707 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:28.707 "is_configured": true, 00:14:28.707 "data_offset": 0, 00:14:28.707 "data_size": 65536 00:14:28.707 } 00:14:28.707 ] 00:14:28.707 }' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.707 
04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.707 "name": "raid_bdev1", 00:14:28.707 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:28.707 "strip_size_kb": 0, 00:14:28.707 "state": "online", 00:14:28.707 "raid_level": "raid1", 00:14:28.707 "superblock": false, 00:14:28.707 "num_base_bdevs": 2, 00:14:28.707 "num_base_bdevs_discovered": 2, 00:14:28.707 "num_base_bdevs_operational": 2, 00:14:28.707 "process": { 00:14:28.707 "type": "rebuild", 00:14:28.707 "target": "spare", 00:14:28.707 "progress": { 00:14:28.707 "blocks": 22528, 00:14:28.707 "percent": 34 00:14:28.707 } 00:14:28.707 }, 00:14:28.707 "base_bdevs_list": [ 00:14:28.707 { 00:14:28.707 "name": "spare", 00:14:28.707 "uuid": "54cb0394-b1b8-55ff-82af-ea0647c41aeb", 00:14:28.707 "is_configured": true, 00:14:28.707 "data_offset": 0, 00:14:28.707 "data_size": 65536 00:14:28.707 }, 00:14:28.707 { 00:14:28.707 "name": "BaseBdev2", 00:14:28.707 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:28.707 "is_configured": true, 00:14:28.707 "data_offset": 0, 00:14:28.707 "data_size": 65536 00:14:28.707 } 00:14:28.707 ] 00:14:28.707 }' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:28.707 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.966 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.966 04:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.950 "name": "raid_bdev1", 00:14:29.950 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:29.950 "strip_size_kb": 0, 00:14:29.950 "state": "online", 00:14:29.950 "raid_level": "raid1", 00:14:29.950 "superblock": false, 00:14:29.950 "num_base_bdevs": 2, 00:14:29.950 "num_base_bdevs_discovered": 2, 00:14:29.950 "num_base_bdevs_operational": 2, 00:14:29.950 "process": { 
00:14:29.950 "type": "rebuild", 00:14:29.950 "target": "spare", 00:14:29.950 "progress": { 00:14:29.950 "blocks": 45056, 00:14:29.950 "percent": 68 00:14:29.950 } 00:14:29.950 }, 00:14:29.950 "base_bdevs_list": [ 00:14:29.950 { 00:14:29.950 "name": "spare", 00:14:29.950 "uuid": "54cb0394-b1b8-55ff-82af-ea0647c41aeb", 00:14:29.950 "is_configured": true, 00:14:29.950 "data_offset": 0, 00:14:29.950 "data_size": 65536 00:14:29.950 }, 00:14:29.950 { 00:14:29.950 "name": "BaseBdev2", 00:14:29.950 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:29.950 "is_configured": true, 00:14:29.950 "data_offset": 0, 00:14:29.950 "data_size": 65536 00:14:29.950 } 00:14:29.950 ] 00:14:29.950 }' 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.950 04:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.887 [2024-11-27 04:31:27.225863] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:30.887 [2024-11-27 04:31:27.225958] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:30.887 [2024-11-27 04:31:27.226007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.887 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.147 "name": "raid_bdev1", 00:14:31.147 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:31.147 "strip_size_kb": 0, 00:14:31.147 "state": "online", 00:14:31.147 "raid_level": "raid1", 00:14:31.147 "superblock": false, 00:14:31.147 "num_base_bdevs": 2, 00:14:31.147 "num_base_bdevs_discovered": 2, 00:14:31.147 "num_base_bdevs_operational": 2, 00:14:31.147 "base_bdevs_list": [ 00:14:31.147 { 00:14:31.147 "name": "spare", 00:14:31.147 "uuid": "54cb0394-b1b8-55ff-82af-ea0647c41aeb", 00:14:31.147 "is_configured": true, 00:14:31.147 "data_offset": 0, 00:14:31.147 "data_size": 65536 00:14:31.147 }, 00:14:31.147 { 00:14:31.147 "name": "BaseBdev2", 00:14:31.147 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:31.147 "is_configured": true, 00:14:31.147 "data_offset": 0, 00:14:31.147 "data_size": 65536 00:14:31.147 } 00:14:31.147 ] 00:14:31.147 }' 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:31.147 04:31:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.147 "name": "raid_bdev1", 00:14:31.147 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:31.147 "strip_size_kb": 0, 00:14:31.147 "state": "online", 00:14:31.147 "raid_level": "raid1", 00:14:31.147 "superblock": false, 00:14:31.147 "num_base_bdevs": 2, 00:14:31.147 "num_base_bdevs_discovered": 2, 00:14:31.147 "num_base_bdevs_operational": 2, 00:14:31.147 "base_bdevs_list": [ 00:14:31.147 { 00:14:31.147 "name": "spare", 00:14:31.147 "uuid": "54cb0394-b1b8-55ff-82af-ea0647c41aeb", 00:14:31.147 "is_configured": true, 
00:14:31.147 "data_offset": 0, 00:14:31.147 "data_size": 65536 00:14:31.147 }, 00:14:31.147 { 00:14:31.147 "name": "BaseBdev2", 00:14:31.147 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:31.147 "is_configured": true, 00:14:31.147 "data_offset": 0, 00:14:31.147 "data_size": 65536 00:14:31.147 } 00:14:31.147 ] 00:14:31.147 }' 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.147 04:31:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.147 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.407 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.407 "name": "raid_bdev1", 00:14:31.407 "uuid": "6e1d8dff-c429-46d3-883d-5b3283427de0", 00:14:31.407 "strip_size_kb": 0, 00:14:31.407 "state": "online", 00:14:31.407 "raid_level": "raid1", 00:14:31.407 "superblock": false, 00:14:31.407 "num_base_bdevs": 2, 00:14:31.407 "num_base_bdevs_discovered": 2, 00:14:31.407 "num_base_bdevs_operational": 2, 00:14:31.407 "base_bdevs_list": [ 00:14:31.407 { 00:14:31.407 "name": "spare", 00:14:31.407 "uuid": "54cb0394-b1b8-55ff-82af-ea0647c41aeb", 00:14:31.407 "is_configured": true, 00:14:31.407 "data_offset": 0, 00:14:31.407 "data_size": 65536 00:14:31.407 }, 00:14:31.407 { 00:14:31.407 "name": "BaseBdev2", 00:14:31.407 "uuid": "73e3793b-207b-54c3-b649-dc8175edbbee", 00:14:31.407 "is_configured": true, 00:14:31.407 "data_offset": 0, 00:14:31.407 "data_size": 65536 00:14:31.407 } 00:14:31.407 ] 00:14:31.407 }' 00:14:31.407 04:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.407 04:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.667 [2024-11-27 04:31:28.176082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.667 [2024-11-27 
04:31:28.176144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.667 [2024-11-27 04:31:28.176246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.667 [2024-11-27 04:31:28.176321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.667 [2024-11-27 04:31:28.176333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.667 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:31.926 /dev/nbd0 00:14:31.926 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.926 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.926 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:31.926 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:31.926 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.926 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.926 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.186 1+0 records in 00:14:32.186 1+0 records out 00:14:32.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324842 s, 12.6 MB/s 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:32.186 /dev/nbd1 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:32.186 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:32.444 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:32.444 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.445 1+0 records in 00:14:32.445 1+0 records out 00:14:32.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413889 s, 9.9 MB/s 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.445 04:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.703 04:31:29 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.703 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.703 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.703 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.703 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.703 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.703 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:32.703 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.703 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.703 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75627 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75627 ']' 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75627 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75627 00:14:32.963 killing process with pid 75627 00:14:32.963 Received shutdown signal, test time was about 60.000000 seconds 00:14:32.963 00:14:32.963 Latency(us) 00:14:32.963 [2024-11-27T04:31:29.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.963 [2024-11-27T04:31:29.550Z] =================================================================================================================== 00:14:32.963 [2024-11-27T04:31:29.550Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75627' 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75627 00:14:32.963 [2024-11-27 04:31:29.478404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.963 04:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75627 00:14:33.222 [2024-11-27 04:31:29.797897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.623 04:31:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:34.623 00:14:34.623 real 0m15.892s 00:14:34.623 user 0m18.080s 00:14:34.623 sys 
0m3.316s 00:14:34.623 04:31:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.623 ************************************ 00:14:34.623 END TEST raid_rebuild_test 00:14:34.623 ************************************ 00:14:34.623 04:31:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.623 04:31:31 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:34.623 04:31:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:34.623 04:31:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.623 04:31:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.623 ************************************ 00:14:34.623 START TEST raid_rebuild_test_sb 00:14:34.623 ************************************ 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76046 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76046 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 76046 ']' 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.623 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.623 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:34.623 Zero copy mechanism will not be used. 00:14:34.623 [2024-11-27 04:31:31.153657] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:14:34.623 [2024-11-27 04:31:31.153777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76046 ] 00:14:34.882 [2024-11-27 04:31:31.310766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.882 [2024-11-27 04:31:31.425413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.140 [2024-11-27 04:31:31.636735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.140 [2024-11-27 04:31:31.636799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.722 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.722 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:35.722 04:31:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.722 04:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:35.722 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.722 04:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.722 BaseBdev1_malloc 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.722 [2024-11-27 04:31:32.043806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:35.722 [2024-11-27 04:31:32.043908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.722 [2024-11-27 04:31:32.043934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:35.722 [2024-11-27 04:31:32.043946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.722 [2024-11-27 04:31:32.046022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.722 [2024-11-27 04:31:32.046063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.722 BaseBdev1 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.722 BaseBdev2_malloc 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.722 [2024-11-27 04:31:32.093481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:35.722 [2024-11-27 04:31:32.093621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.722 [2024-11-27 04:31:32.093670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:35.722 [2024-11-27 04:31:32.093718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.722 [2024-11-27 04:31:32.096211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.722 [2024-11-27 04:31:32.096312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:35.722 BaseBdev2 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.722 spare_malloc 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:35.722 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.723 spare_delay 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.723 [2024-11-27 04:31:32.174600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:35.723 [2024-11-27 04:31:32.174663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.723 [2024-11-27 04:31:32.174685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:35.723 [2024-11-27 04:31:32.174698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.723 [2024-11-27 04:31:32.177028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.723 spare 00:14:35.723 [2024-11-27 04:31:32.177144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.723 
04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.723 [2024-11-27 04:31:32.182649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.723 [2024-11-27 04:31:32.184625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.723 [2024-11-27 04:31:32.184810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:35.723 [2024-11-27 04:31:32.184827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:35.723 [2024-11-27 04:31:32.185111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:35.723 [2024-11-27 04:31:32.185291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:35.723 [2024-11-27 04:31:32.185301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:35.723 [2024-11-27 04:31:32.185468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.723 "name": "raid_bdev1", 00:14:35.723 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:35.723 "strip_size_kb": 0, 00:14:35.723 "state": "online", 00:14:35.723 "raid_level": "raid1", 00:14:35.723 "superblock": true, 00:14:35.723 "num_base_bdevs": 2, 00:14:35.723 "num_base_bdevs_discovered": 2, 00:14:35.723 "num_base_bdevs_operational": 2, 00:14:35.723 "base_bdevs_list": [ 00:14:35.723 { 00:14:35.723 "name": "BaseBdev1", 00:14:35.723 "uuid": "48475c7b-4db6-555f-a5fc-32cee089631f", 00:14:35.723 "is_configured": true, 00:14:35.723 "data_offset": 2048, 00:14:35.723 "data_size": 63488 00:14:35.723 }, 00:14:35.723 { 00:14:35.723 "name": "BaseBdev2", 00:14:35.723 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:35.723 "is_configured": true, 00:14:35.723 "data_offset": 2048, 00:14:35.723 "data_size": 63488 00:14:35.723 } 00:14:35.723 ] 00:14:35.723 }' 00:14:35.723 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.723 04:31:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.294 [2024-11-27 04:31:32.610285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.294 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:36.554 [2024-11-27 04:31:32.889520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:36.554 /dev/nbd0 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.554 1+0 records in 00:14:36.554 1+0 records out 00:14:36.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417746 s, 9.8 MB/s 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:36.554 04:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:41.831 63488+0 records in 00:14:41.831 63488+0 records out 00:14:41.831 32505856 bytes (33 MB, 31 MiB) copied, 4.36503 s, 7.4 MB/s 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:41.831 [2024-11-27 04:31:37.544565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.831 [2024-11-27 04:31:37.576591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.831 "name": "raid_bdev1", 00:14:41.831 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:41.831 "strip_size_kb": 0, 00:14:41.831 "state": "online", 00:14:41.831 "raid_level": "raid1", 
00:14:41.831 "superblock": true, 00:14:41.831 "num_base_bdevs": 2, 00:14:41.831 "num_base_bdevs_discovered": 1, 00:14:41.831 "num_base_bdevs_operational": 1, 00:14:41.831 "base_bdevs_list": [ 00:14:41.831 { 00:14:41.831 "name": null, 00:14:41.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.831 "is_configured": false, 00:14:41.831 "data_offset": 0, 00:14:41.831 "data_size": 63488 00:14:41.831 }, 00:14:41.831 { 00:14:41.831 "name": "BaseBdev2", 00:14:41.831 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:41.831 "is_configured": true, 00:14:41.831 "data_offset": 2048, 00:14:41.831 "data_size": 63488 00:14:41.831 } 00:14:41.831 ] 00:14:41.831 }' 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.831 04:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.831 04:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:41.831 04:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.831 04:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.831 [2024-11-27 04:31:38.043838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.831 [2024-11-27 04:31:38.061461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:41.831 04:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.831 [2024-11-27 04:31:38.063392] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.831 04:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.771 "name": "raid_bdev1", 00:14:42.771 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:42.771 "strip_size_kb": 0, 00:14:42.771 "state": "online", 00:14:42.771 "raid_level": "raid1", 00:14:42.771 "superblock": true, 00:14:42.771 "num_base_bdevs": 2, 00:14:42.771 "num_base_bdevs_discovered": 2, 00:14:42.771 "num_base_bdevs_operational": 2, 00:14:42.771 "process": { 00:14:42.771 "type": "rebuild", 00:14:42.771 "target": "spare", 00:14:42.771 "progress": { 00:14:42.771 "blocks": 20480, 00:14:42.771 "percent": 32 00:14:42.771 } 00:14:42.771 }, 00:14:42.771 "base_bdevs_list": [ 00:14:42.771 { 00:14:42.771 "name": "spare", 00:14:42.771 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:42.771 "is_configured": true, 00:14:42.771 "data_offset": 2048, 00:14:42.771 "data_size": 63488 00:14:42.771 }, 00:14:42.771 { 00:14:42.771 "name": "BaseBdev2", 00:14:42.771 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:42.771 "is_configured": true, 00:14:42.771 "data_offset": 2048, 
00:14:42.771 "data_size": 63488 00:14:42.771 } 00:14:42.771 ] 00:14:42.771 }' 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.771 [2024-11-27 04:31:39.230845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.771 [2024-11-27 04:31:39.269272] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:42.771 [2024-11-27 04:31:39.269472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.771 [2024-11-27 04:31:39.269511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.771 [2024-11-27 04:31:39.269539] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.771 04:31:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.771 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.031 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.031 "name": "raid_bdev1", 00:14:43.031 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:43.031 "strip_size_kb": 0, 00:14:43.031 "state": "online", 00:14:43.031 "raid_level": "raid1", 00:14:43.031 "superblock": true, 00:14:43.031 "num_base_bdevs": 2, 00:14:43.031 "num_base_bdevs_discovered": 1, 00:14:43.031 "num_base_bdevs_operational": 1, 00:14:43.031 "base_bdevs_list": [ 00:14:43.031 { 00:14:43.031 "name": null, 00:14:43.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.031 "is_configured": false, 00:14:43.031 "data_offset": 0, 00:14:43.031 "data_size": 63488 00:14:43.031 }, 00:14:43.031 { 
00:14:43.031 "name": "BaseBdev2", 00:14:43.031 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:43.031 "is_configured": true, 00:14:43.031 "data_offset": 2048, 00:14:43.031 "data_size": 63488 00:14:43.031 } 00:14:43.031 ] 00:14:43.031 }' 00:14:43.031 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.031 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.291 "name": "raid_bdev1", 00:14:43.291 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:43.291 "strip_size_kb": 0, 00:14:43.291 "state": "online", 00:14:43.291 "raid_level": "raid1", 00:14:43.291 "superblock": true, 00:14:43.291 "num_base_bdevs": 2, 00:14:43.291 "num_base_bdevs_discovered": 1, 00:14:43.291 "num_base_bdevs_operational": 1, 
00:14:43.291 "base_bdevs_list": [ 00:14:43.291 { 00:14:43.291 "name": null, 00:14:43.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.291 "is_configured": false, 00:14:43.291 "data_offset": 0, 00:14:43.291 "data_size": 63488 00:14:43.291 }, 00:14:43.291 { 00:14:43.291 "name": "BaseBdev2", 00:14:43.291 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:43.291 "is_configured": true, 00:14:43.291 "data_offset": 2048, 00:14:43.291 "data_size": 63488 00:14:43.291 } 00:14:43.291 ] 00:14:43.291 }' 00:14:43.291 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.550 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.550 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.550 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.550 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:43.550 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.550 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.550 [2024-11-27 04:31:39.959671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.550 [2024-11-27 04:31:39.978110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:43.550 04:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.550 [2024-11-27 04:31:39.980022] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.550 04:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:44.487 04:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:44.487 04:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.487 04:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.487 04:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.487 04:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.487 04:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.487 04:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.487 04:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.487 04:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.487 04:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.487 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.487 "name": "raid_bdev1", 00:14:44.487 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:44.487 "strip_size_kb": 0, 00:14:44.488 "state": "online", 00:14:44.488 "raid_level": "raid1", 00:14:44.488 "superblock": true, 00:14:44.488 "num_base_bdevs": 2, 00:14:44.488 "num_base_bdevs_discovered": 2, 00:14:44.488 "num_base_bdevs_operational": 2, 00:14:44.488 "process": { 00:14:44.488 "type": "rebuild", 00:14:44.488 "target": "spare", 00:14:44.488 "progress": { 00:14:44.488 "blocks": 20480, 00:14:44.488 "percent": 32 00:14:44.488 } 00:14:44.488 }, 00:14:44.488 "base_bdevs_list": [ 00:14:44.488 { 00:14:44.488 "name": "spare", 00:14:44.488 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:44.488 "is_configured": true, 00:14:44.488 "data_offset": 2048, 00:14:44.488 "data_size": 63488 00:14:44.488 }, 00:14:44.488 { 00:14:44.488 "name": "BaseBdev2", 00:14:44.488 "uuid": 
"c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:44.488 "is_configured": true, 00:14:44.488 "data_offset": 2048, 00:14:44.488 "data_size": 63488 00:14:44.488 } 00:14:44.488 ] 00:14:44.488 }' 00:14:44.488 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:44.747 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=409 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.747 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.747 "name": "raid_bdev1", 00:14:44.747 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:44.747 "strip_size_kb": 0, 00:14:44.747 "state": "online", 00:14:44.747 "raid_level": "raid1", 00:14:44.747 "superblock": true, 00:14:44.747 "num_base_bdevs": 2, 00:14:44.747 "num_base_bdevs_discovered": 2, 00:14:44.747 "num_base_bdevs_operational": 2, 00:14:44.747 "process": { 00:14:44.747 "type": "rebuild", 00:14:44.747 "target": "spare", 00:14:44.747 "progress": { 00:14:44.747 "blocks": 22528, 00:14:44.747 "percent": 35 00:14:44.747 } 00:14:44.747 }, 00:14:44.747 "base_bdevs_list": [ 00:14:44.747 { 00:14:44.747 "name": "spare", 00:14:44.747 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:44.747 "is_configured": true, 00:14:44.747 "data_offset": 2048, 00:14:44.747 "data_size": 63488 00:14:44.747 }, 00:14:44.747 { 00:14:44.747 "name": "BaseBdev2", 00:14:44.747 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:44.748 "is_configured": true, 00:14:44.748 "data_offset": 2048, 00:14:44.748 "data_size": 63488 00:14:44.748 } 00:14:44.748 ] 00:14:44.748 }' 00:14:44.748 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.748 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:44.748 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.748 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.748 04:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.127 "name": "raid_bdev1", 00:14:46.127 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:46.127 "strip_size_kb": 0, 00:14:46.127 "state": "online", 00:14:46.127 "raid_level": "raid1", 00:14:46.127 "superblock": true, 00:14:46.127 "num_base_bdevs": 2, 00:14:46.127 "num_base_bdevs_discovered": 2, 00:14:46.127 
"num_base_bdevs_operational": 2, 00:14:46.127 "process": { 00:14:46.127 "type": "rebuild", 00:14:46.127 "target": "spare", 00:14:46.127 "progress": { 00:14:46.127 "blocks": 45056, 00:14:46.127 "percent": 70 00:14:46.127 } 00:14:46.127 }, 00:14:46.127 "base_bdevs_list": [ 00:14:46.127 { 00:14:46.127 "name": "spare", 00:14:46.127 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:46.127 "is_configured": true, 00:14:46.127 "data_offset": 2048, 00:14:46.127 "data_size": 63488 00:14:46.127 }, 00:14:46.127 { 00:14:46.127 "name": "BaseBdev2", 00:14:46.127 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:46.127 "is_configured": true, 00:14:46.127 "data_offset": 2048, 00:14:46.127 "data_size": 63488 00:14:46.127 } 00:14:46.127 ] 00:14:46.127 }' 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.127 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.128 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.128 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.128 04:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.695 [2024-11-27 04:31:43.095109] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:46.695 [2024-11-27 04:31:43.095202] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:46.695 [2024-11-27 04:31:43.095338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.953 "name": "raid_bdev1", 00:14:46.953 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:46.953 "strip_size_kb": 0, 00:14:46.953 "state": "online", 00:14:46.953 "raid_level": "raid1", 00:14:46.953 "superblock": true, 00:14:46.953 "num_base_bdevs": 2, 00:14:46.953 "num_base_bdevs_discovered": 2, 00:14:46.953 "num_base_bdevs_operational": 2, 00:14:46.953 "base_bdevs_list": [ 00:14:46.953 { 00:14:46.953 "name": "spare", 00:14:46.953 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:46.953 "is_configured": true, 00:14:46.953 "data_offset": 2048, 00:14:46.953 "data_size": 63488 00:14:46.953 }, 00:14:46.953 { 00:14:46.953 "name": "BaseBdev2", 00:14:46.953 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:46.953 "is_configured": true, 00:14:46.953 "data_offset": 2048, 00:14:46.953 "data_size": 63488 00:14:46.953 } 00:14:46.953 ] 00:14:46.953 }' 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:46.953 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.212 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.212 "name": "raid_bdev1", 00:14:47.212 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:47.212 "strip_size_kb": 0, 00:14:47.212 "state": "online", 00:14:47.212 "raid_level": "raid1", 00:14:47.212 "superblock": true, 00:14:47.212 "num_base_bdevs": 2, 00:14:47.212 "num_base_bdevs_discovered": 2, 00:14:47.212 "num_base_bdevs_operational": 2, 
00:14:47.212 "base_bdevs_list": [ 00:14:47.212 { 00:14:47.212 "name": "spare", 00:14:47.212 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:47.212 "is_configured": true, 00:14:47.212 "data_offset": 2048, 00:14:47.212 "data_size": 63488 00:14:47.212 }, 00:14:47.212 { 00:14:47.212 "name": "BaseBdev2", 00:14:47.212 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:47.212 "is_configured": true, 00:14:47.212 "data_offset": 2048, 00:14:47.212 "data_size": 63488 00:14:47.213 } 00:14:47.213 ] 00:14:47.213 }' 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.213 04:31:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.213 "name": "raid_bdev1", 00:14:47.213 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:47.213 "strip_size_kb": 0, 00:14:47.213 "state": "online", 00:14:47.213 "raid_level": "raid1", 00:14:47.213 "superblock": true, 00:14:47.213 "num_base_bdevs": 2, 00:14:47.213 "num_base_bdevs_discovered": 2, 00:14:47.213 "num_base_bdevs_operational": 2, 00:14:47.213 "base_bdevs_list": [ 00:14:47.213 { 00:14:47.213 "name": "spare", 00:14:47.213 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:47.213 "is_configured": true, 00:14:47.213 "data_offset": 2048, 00:14:47.213 "data_size": 63488 00:14:47.213 }, 00:14:47.213 { 00:14:47.213 "name": "BaseBdev2", 00:14:47.213 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:47.213 "is_configured": true, 00:14:47.213 "data_offset": 2048, 00:14:47.213 "data_size": 63488 00:14:47.213 } 00:14:47.213 ] 00:14:47.213 }' 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.213 04:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.811 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:47.811 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:47.811 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.811 [2024-11-27 04:31:44.185493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:47.811 [2024-11-27 04:31:44.185534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.811 [2024-11-27 04:31:44.185623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.811 [2024-11-27 04:31:44.185696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.812 [2024-11-27 04:31:44.185708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.812 
04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:47.812 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:48.071 /dev/nbd0 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:48.071 04:31:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.071 1+0 records in 00:14:48.071 1+0 records out 00:14:48.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388123 s, 10.6 MB/s 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:48.071 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:48.072 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.072 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.072 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:48.331 /dev/nbd1 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- 
# grep -q -w nbd1 /proc/partitions 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.331 1+0 records in 00:14:48.331 1+0 records out 00:14:48.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320408 s, 12.8 MB/s 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.331 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:48.588 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:48.588 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.588 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.588 04:31:44 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.588 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:48.588 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.588 04:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.847 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.107 [2024-11-27 04:31:45.444674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.107 [2024-11-27 04:31:45.444734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.107 [2024-11-27 04:31:45.444759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:49.107 [2024-11-27 04:31:45.444767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.107 [2024-11-27 04:31:45.447019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.107 [2024-11-27 04:31:45.447059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.107 [2024-11-27 04:31:45.447165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:14:49.107 [2024-11-27 04:31:45.447249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.107 [2024-11-27 04:31:45.447389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.107 spare 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.107 [2024-11-27 04:31:45.547326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:49.107 [2024-11-27 04:31:45.547383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.107 [2024-11-27 04:31:45.547757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:49.107 [2024-11-27 04:31:45.548011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:49.107 [2024-11-27 04:31:45.548028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:49.107 [2024-11-27 04:31:45.548243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.107 04:31:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.107 "name": "raid_bdev1", 00:14:49.107 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:49.107 "strip_size_kb": 0, 00:14:49.107 "state": "online", 00:14:49.107 "raid_level": "raid1", 00:14:49.107 "superblock": true, 00:14:49.107 "num_base_bdevs": 2, 00:14:49.107 "num_base_bdevs_discovered": 2, 00:14:49.107 "num_base_bdevs_operational": 2, 00:14:49.107 "base_bdevs_list": [ 00:14:49.107 { 00:14:49.107 "name": "spare", 00:14:49.107 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:49.107 "is_configured": true, 00:14:49.107 "data_offset": 2048, 00:14:49.107 "data_size": 63488 00:14:49.107 }, 00:14:49.107 { 
00:14:49.107 "name": "BaseBdev2", 00:14:49.107 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:49.107 "is_configured": true, 00:14:49.107 "data_offset": 2048, 00:14:49.107 "data_size": 63488 00:14:49.107 } 00:14:49.107 ] 00:14:49.107 }' 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.107 04:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.677 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.677 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.677 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.677 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.677 04:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.677 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.677 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.677 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.677 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.677 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.677 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.677 "name": "raid_bdev1", 00:14:49.677 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:49.677 "strip_size_kb": 0, 00:14:49.677 "state": "online", 00:14:49.677 "raid_level": "raid1", 00:14:49.677 "superblock": true, 00:14:49.677 "num_base_bdevs": 2, 00:14:49.677 "num_base_bdevs_discovered": 2, 00:14:49.677 "num_base_bdevs_operational": 2, 
00:14:49.677 "base_bdevs_list": [ 00:14:49.677 { 00:14:49.677 "name": "spare", 00:14:49.677 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:49.677 "is_configured": true, 00:14:49.677 "data_offset": 2048, 00:14:49.677 "data_size": 63488 00:14:49.677 }, 00:14:49.677 { 00:14:49.677 "name": "BaseBdev2", 00:14:49.677 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:49.678 "is_configured": true, 00:14:49.678 "data_offset": 2048, 00:14:49.678 "data_size": 63488 00:14:49.678 } 00:14:49.678 ] 00:14:49.678 }' 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.678 [2024-11-27 04:31:46.191532] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.678 "name": "raid_bdev1", 00:14:49.678 "uuid": 
"c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:49.678 "strip_size_kb": 0, 00:14:49.678 "state": "online", 00:14:49.678 "raid_level": "raid1", 00:14:49.678 "superblock": true, 00:14:49.678 "num_base_bdevs": 2, 00:14:49.678 "num_base_bdevs_discovered": 1, 00:14:49.678 "num_base_bdevs_operational": 1, 00:14:49.678 "base_bdevs_list": [ 00:14:49.678 { 00:14:49.678 "name": null, 00:14:49.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.678 "is_configured": false, 00:14:49.678 "data_offset": 0, 00:14:49.678 "data_size": 63488 00:14:49.678 }, 00:14:49.678 { 00:14:49.678 "name": "BaseBdev2", 00:14:49.678 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:49.678 "is_configured": true, 00:14:49.678 "data_offset": 2048, 00:14:49.678 "data_size": 63488 00:14:49.678 } 00:14:49.678 ] 00:14:49.678 }' 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.678 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.246 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:50.246 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.246 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.246 [2024-11-27 04:31:46.666708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.246 [2024-11-27 04:31:46.666910] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:50.246 [2024-11-27 04:31:46.666936] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:50.246 [2024-11-27 04:31:46.666971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.246 [2024-11-27 04:31:46.682687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:50.246 04:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.246 [2024-11-27 04:31:46.684514] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.246 04:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.185 "name": "raid_bdev1", 00:14:51.185 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:51.185 "strip_size_kb": 0, 00:14:51.185 "state": "online", 00:14:51.185 "raid_level": "raid1", 
00:14:51.185 "superblock": true, 00:14:51.185 "num_base_bdevs": 2, 00:14:51.185 "num_base_bdevs_discovered": 2, 00:14:51.185 "num_base_bdevs_operational": 2, 00:14:51.185 "process": { 00:14:51.185 "type": "rebuild", 00:14:51.185 "target": "spare", 00:14:51.185 "progress": { 00:14:51.185 "blocks": 20480, 00:14:51.185 "percent": 32 00:14:51.185 } 00:14:51.185 }, 00:14:51.185 "base_bdevs_list": [ 00:14:51.185 { 00:14:51.185 "name": "spare", 00:14:51.185 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:51.185 "is_configured": true, 00:14:51.185 "data_offset": 2048, 00:14:51.185 "data_size": 63488 00:14:51.185 }, 00:14:51.185 { 00:14:51.185 "name": "BaseBdev2", 00:14:51.185 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:51.185 "is_configured": true, 00:14:51.185 "data_offset": 2048, 00:14:51.185 "data_size": 63488 00:14:51.185 } 00:14:51.185 ] 00:14:51.185 }' 00:14:51.185 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.445 [2024-11-27 04:31:47.848527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.445 [2024-11-27 04:31:47.890241] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.445 [2024-11-27 04:31:47.890365] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:51.445 [2024-11-27 04:31:47.890381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.445 [2024-11-27 04:31:47.890390] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.445 "name": "raid_bdev1", 00:14:51.445 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:51.445 "strip_size_kb": 0, 00:14:51.445 "state": "online", 00:14:51.445 "raid_level": "raid1", 00:14:51.445 "superblock": true, 00:14:51.445 "num_base_bdevs": 2, 00:14:51.445 "num_base_bdevs_discovered": 1, 00:14:51.445 "num_base_bdevs_operational": 1, 00:14:51.445 "base_bdevs_list": [ 00:14:51.445 { 00:14:51.445 "name": null, 00:14:51.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.445 "is_configured": false, 00:14:51.445 "data_offset": 0, 00:14:51.445 "data_size": 63488 00:14:51.445 }, 00:14:51.445 { 00:14:51.445 "name": "BaseBdev2", 00:14:51.445 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:51.445 "is_configured": true, 00:14:51.445 "data_offset": 2048, 00:14:51.445 "data_size": 63488 00:14:51.445 } 00:14:51.445 ] 00:14:51.445 }' 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.445 04:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.015 04:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:52.015 04:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.015 04:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.015 [2024-11-27 04:31:48.351148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:52.015 [2024-11-27 04:31:48.351214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.015 [2024-11-27 04:31:48.351234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:52.015 [2024-11-27 04:31:48.351245] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.015 [2024-11-27 04:31:48.351775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.015 [2024-11-27 04:31:48.351808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:52.015 [2024-11-27 04:31:48.351911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:52.015 [2024-11-27 04:31:48.351935] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:52.015 [2024-11-27 04:31:48.351947] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:52.015 [2024-11-27 04:31:48.351976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.015 [2024-11-27 04:31:48.368715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:52.015 spare 00:14:52.015 04:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.015 04:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:52.015 [2024-11-27 04:31:48.370655] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.954 "name": "raid_bdev1", 00:14:52.954 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:52.954 "strip_size_kb": 0, 00:14:52.954 "state": "online", 00:14:52.954 "raid_level": "raid1", 00:14:52.954 "superblock": true, 00:14:52.954 "num_base_bdevs": 2, 00:14:52.954 "num_base_bdevs_discovered": 2, 00:14:52.954 "num_base_bdevs_operational": 2, 00:14:52.954 "process": { 00:14:52.954 "type": "rebuild", 00:14:52.954 "target": "spare", 00:14:52.954 "progress": { 00:14:52.954 "blocks": 20480, 00:14:52.954 "percent": 32 00:14:52.954 } 00:14:52.954 }, 00:14:52.954 "base_bdevs_list": [ 00:14:52.954 { 00:14:52.954 "name": "spare", 00:14:52.954 "uuid": "da3a9b8d-f03c-5de3-8237-9222a725fdc9", 00:14:52.954 "is_configured": true, 00:14:52.954 "data_offset": 2048, 00:14:52.954 "data_size": 63488 00:14:52.954 }, 00:14:52.954 { 00:14:52.954 "name": "BaseBdev2", 00:14:52.954 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:52.954 "is_configured": true, 00:14:52.954 "data_offset": 2048, 00:14:52.954 "data_size": 63488 00:14:52.954 } 00:14:52.954 ] 00:14:52.954 }' 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.954 
04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.954 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.954 [2024-11-27 04:31:49.502271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.213 [2024-11-27 04:31:49.576574] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:53.213 [2024-11-27 04:31:49.576656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.213 [2024-11-27 04:31:49.576676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.213 [2024-11-27 04:31:49.576684] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.213 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.214 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.214 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.214 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.214 "name": "raid_bdev1", 00:14:53.214 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:53.214 "strip_size_kb": 0, 00:14:53.214 "state": "online", 00:14:53.214 "raid_level": "raid1", 00:14:53.214 "superblock": true, 00:14:53.214 "num_base_bdevs": 2, 00:14:53.214 "num_base_bdevs_discovered": 1, 00:14:53.214 "num_base_bdevs_operational": 1, 00:14:53.214 "base_bdevs_list": [ 00:14:53.214 { 00:14:53.214 "name": null, 00:14:53.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.214 "is_configured": false, 00:14:53.214 "data_offset": 0, 00:14:53.214 "data_size": 63488 00:14:53.214 }, 00:14:53.214 { 00:14:53.214 "name": "BaseBdev2", 00:14:53.214 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:53.214 "is_configured": true, 00:14:53.214 "data_offset": 2048, 00:14:53.214 "data_size": 63488 00:14:53.214 } 00:14:53.214 ] 00:14:53.214 }' 00:14:53.214 04:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.214 04:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.473 04:31:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.473 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.473 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.473 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.473 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.473 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.473 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.473 04:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.473 04:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.731 04:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.731 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.731 "name": "raid_bdev1", 00:14:53.731 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:53.731 "strip_size_kb": 0, 00:14:53.731 "state": "online", 00:14:53.731 "raid_level": "raid1", 00:14:53.731 "superblock": true, 00:14:53.731 "num_base_bdevs": 2, 00:14:53.731 "num_base_bdevs_discovered": 1, 00:14:53.731 "num_base_bdevs_operational": 1, 00:14:53.731 "base_bdevs_list": [ 00:14:53.731 { 00:14:53.731 "name": null, 00:14:53.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.731 "is_configured": false, 00:14:53.731 "data_offset": 0, 00:14:53.731 "data_size": 63488 00:14:53.731 }, 00:14:53.731 { 00:14:53.731 "name": "BaseBdev2", 00:14:53.731 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:53.731 "is_configured": true, 00:14:53.731 "data_offset": 2048, 00:14:53.731 "data_size": 
63488 00:14:53.731 } 00:14:53.731 ] 00:14:53.732 }' 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.732 [2024-11-27 04:31:50.195882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:53.732 [2024-11-27 04:31:50.195956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.732 [2024-11-27 04:31:50.195989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:53.732 [2024-11-27 04:31:50.196012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.732 [2024-11-27 04:31:50.196566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.732 [2024-11-27 04:31:50.196597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:14:53.732 [2024-11-27 04:31:50.196710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:53.732 [2024-11-27 04:31:50.196736] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:53.732 [2024-11-27 04:31:50.196748] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:53.732 [2024-11-27 04:31:50.196760] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:53.732 BaseBdev1 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.732 04:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.684 "name": "raid_bdev1", 00:14:54.684 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:54.684 "strip_size_kb": 0, 00:14:54.684 "state": "online", 00:14:54.684 "raid_level": "raid1", 00:14:54.684 "superblock": true, 00:14:54.684 "num_base_bdevs": 2, 00:14:54.684 "num_base_bdevs_discovered": 1, 00:14:54.684 "num_base_bdevs_operational": 1, 00:14:54.684 "base_bdevs_list": [ 00:14:54.684 { 00:14:54.684 "name": null, 00:14:54.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.684 "is_configured": false, 00:14:54.684 "data_offset": 0, 00:14:54.684 "data_size": 63488 00:14:54.684 }, 00:14:54.684 { 00:14:54.684 "name": "BaseBdev2", 00:14:54.684 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:54.684 "is_configured": true, 00:14:54.684 "data_offset": 2048, 00:14:54.684 "data_size": 63488 00:14:54.684 } 00:14:54.684 ] 00:14:54.684 }' 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.684 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.257 "name": "raid_bdev1", 00:14:55.257 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:55.257 "strip_size_kb": 0, 00:14:55.257 "state": "online", 00:14:55.257 "raid_level": "raid1", 00:14:55.257 "superblock": true, 00:14:55.257 "num_base_bdevs": 2, 00:14:55.257 "num_base_bdevs_discovered": 1, 00:14:55.257 "num_base_bdevs_operational": 1, 00:14:55.257 "base_bdevs_list": [ 00:14:55.257 { 00:14:55.257 "name": null, 00:14:55.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.257 "is_configured": false, 00:14:55.257 "data_offset": 0, 00:14:55.257 "data_size": 63488 00:14:55.257 }, 00:14:55.257 { 00:14:55.257 "name": "BaseBdev2", 00:14:55.257 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:55.257 "is_configured": true, 00:14:55.257 "data_offset": 2048, 00:14:55.257 "data_size": 63488 00:14:55.257 } 00:14:55.257 ] 00:14:55.257 }' 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.257 04:31:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.257 [2024-11-27 04:31:51.785339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.257 [2024-11-27 04:31:51.785534] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:55.257 [2024-11-27 04:31:51.785563] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:55.257 request: 00:14:55.257 { 00:14:55.257 "base_bdev": "BaseBdev1", 00:14:55.257 "raid_bdev": "raid_bdev1", 00:14:55.257 "method": 
"bdev_raid_add_base_bdev", 00:14:55.257 "req_id": 1 00:14:55.257 } 00:14:55.257 Got JSON-RPC error response 00:14:55.257 response: 00:14:55.257 { 00:14:55.257 "code": -22, 00:14:55.257 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:55.257 } 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:55.257 04:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.642 04:31:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.642 "name": "raid_bdev1", 00:14:56.642 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:56.642 "strip_size_kb": 0, 00:14:56.642 "state": "online", 00:14:56.642 "raid_level": "raid1", 00:14:56.642 "superblock": true, 00:14:56.642 "num_base_bdevs": 2, 00:14:56.642 "num_base_bdevs_discovered": 1, 00:14:56.642 "num_base_bdevs_operational": 1, 00:14:56.642 "base_bdevs_list": [ 00:14:56.642 { 00:14:56.642 "name": null, 00:14:56.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.642 "is_configured": false, 00:14:56.642 "data_offset": 0, 00:14:56.642 "data_size": 63488 00:14:56.642 }, 00:14:56.642 { 00:14:56.642 "name": "BaseBdev2", 00:14:56.642 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:56.642 "is_configured": true, 00:14:56.642 "data_offset": 2048, 00:14:56.642 "data_size": 63488 00:14:56.642 } 00:14:56.642 ] 00:14:56.642 }' 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.642 04:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.642 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.642 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.642 04:31:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.642 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.642 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.642 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.643 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.643 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.643 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.903 "name": "raid_bdev1", 00:14:56.903 "uuid": "c4e34e76-da69-4905-94ab-e7680bfb0db2", 00:14:56.903 "strip_size_kb": 0, 00:14:56.903 "state": "online", 00:14:56.903 "raid_level": "raid1", 00:14:56.903 "superblock": true, 00:14:56.903 "num_base_bdevs": 2, 00:14:56.903 "num_base_bdevs_discovered": 1, 00:14:56.903 "num_base_bdevs_operational": 1, 00:14:56.903 "base_bdevs_list": [ 00:14:56.903 { 00:14:56.903 "name": null, 00:14:56.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.903 "is_configured": false, 00:14:56.903 "data_offset": 0, 00:14:56.903 "data_size": 63488 00:14:56.903 }, 00:14:56.903 { 00:14:56.903 "name": "BaseBdev2", 00:14:56.903 "uuid": "c2ddf1b5-c6c5-566c-a148-34549e0b9ed3", 00:14:56.903 "is_configured": true, 00:14:56.903 "data_offset": 2048, 00:14:56.903 "data_size": 63488 00:14:56.903 } 00:14:56.903 ] 00:14:56.903 }' 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76046 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76046 ']' 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76046 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76046 00:14:56.903 killing process with pid 76046 00:14:56.903 Received shutdown signal, test time was about 60.000000 seconds 00:14:56.903 00:14:56.903 Latency(us) 00:14:56.903 [2024-11-27T04:31:53.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.903 [2024-11-27T04:31:53.490Z] =================================================================================================================== 00:14:56.903 [2024-11-27T04:31:53.490Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76046' 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76046 00:14:56.903 [2024-11-27 04:31:53.404637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.903 [2024-11-27 
04:31:53.404771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.903 04:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76046 00:14:56.903 [2024-11-27 04:31:53.404825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.903 [2024-11-27 04:31:53.404838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:57.162 [2024-11-27 04:31:53.727953] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.539 04:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:58.539 00:14:58.539 real 0m23.890s 00:14:58.539 user 0m28.893s 00:14:58.539 sys 0m3.789s 00:14:58.539 04:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.539 04:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.539 ************************************ 00:14:58.539 END TEST raid_rebuild_test_sb 00:14:58.539 ************************************ 00:14:58.539 04:31:54 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:58.540 04:31:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:58.540 04:31:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.540 04:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.540 ************************************ 00:14:58.540 START TEST raid_rebuild_test_io 00:14:58.540 ************************************ 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:58.540 
04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76783 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76783 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76783 ']' 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.540 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.540 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:58.540 Zero copy mechanism will not be used. 00:14:58.540 [2024-11-27 04:31:55.119890] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:14:58.540 [2024-11-27 04:31:55.120013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76783 ] 00:14:58.799 [2024-11-27 04:31:55.301117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.057 [2024-11-27 04:31:55.421187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.057 [2024-11-27 04:31:55.628767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.057 [2024-11-27 04:31:55.628818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.624 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.624 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:59.624 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.624 04:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:59.624 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.625 04:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.625 BaseBdev1_malloc 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.625 [2024-11-27 04:31:56.049723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:59.625 [2024-11-27 04:31:56.049785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.625 [2024-11-27 04:31:56.049808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:59.625 [2024-11-27 04:31:56.049821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.625 [2024-11-27 04:31:56.052179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.625 [2024-11-27 04:31:56.052228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:59.625 BaseBdev1 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.625 BaseBdev2_malloc 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.625 [2024-11-27 04:31:56.106058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:59.625 [2024-11-27 04:31:56.106142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.625 [2024-11-27 04:31:56.106168] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:59.625 [2024-11-27 04:31:56.106179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.625 [2024-11-27 04:31:56.108416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.625 [2024-11-27 04:31:56.108453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:59.625 BaseBdev2 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.625 spare_malloc 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.625 spare_delay 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.625 [2024-11-27 04:31:56.184748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:14:59.625 [2024-11-27 04:31:56.184807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.625 [2024-11-27 04:31:56.184828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:59.625 [2024-11-27 04:31:56.184838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.625 [2024-11-27 04:31:56.187320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.625 [2024-11-27 04:31:56.187360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:59.625 spare 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.625 [2024-11-27 04:31:56.192794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.625 [2024-11-27 04:31:56.194842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.625 [2024-11-27 04:31:56.194981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:59.625 [2024-11-27 04:31:56.195011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:59.625 [2024-11-27 04:31:56.195345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:59.625 [2024-11-27 04:31:56.195589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:59.625 [2024-11-27 04:31:56.195613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:14:59.625 [2024-11-27 04:31:56.195795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.625 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.883 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.883 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.883 
"name": "raid_bdev1", 00:14:59.883 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:14:59.883 "strip_size_kb": 0, 00:14:59.883 "state": "online", 00:14:59.883 "raid_level": "raid1", 00:14:59.883 "superblock": false, 00:14:59.883 "num_base_bdevs": 2, 00:14:59.883 "num_base_bdevs_discovered": 2, 00:14:59.883 "num_base_bdevs_operational": 2, 00:14:59.883 "base_bdevs_list": [ 00:14:59.883 { 00:14:59.883 "name": "BaseBdev1", 00:14:59.883 "uuid": "0dd2c51d-0119-5b9b-a53e-563efac05ac4", 00:14:59.883 "is_configured": true, 00:14:59.883 "data_offset": 0, 00:14:59.883 "data_size": 65536 00:14:59.883 }, 00:14:59.883 { 00:14:59.883 "name": "BaseBdev2", 00:14:59.883 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:14:59.883 "is_configured": true, 00:14:59.883 "data_offset": 0, 00:14:59.883 "data_size": 65536 00:14:59.883 } 00:14:59.883 ] 00:14:59.883 }' 00:14:59.883 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.883 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.141 [2024-11-27 04:31:56.648352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.141 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.400 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.401 [2024-11-27 04:31:56.747897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:00.401 04:31:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.401 "name": "raid_bdev1", 00:15:00.401 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:00.401 "strip_size_kb": 0, 00:15:00.401 "state": "online", 00:15:00.401 "raid_level": "raid1", 00:15:00.401 "superblock": false, 00:15:00.401 "num_base_bdevs": 2, 00:15:00.401 "num_base_bdevs_discovered": 1, 00:15:00.401 "num_base_bdevs_operational": 1, 00:15:00.401 "base_bdevs_list": [ 00:15:00.401 { 00:15:00.401 "name": null, 00:15:00.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.401 "is_configured": false, 00:15:00.401 "data_offset": 0, 00:15:00.401 "data_size": 65536 00:15:00.401 }, 00:15:00.401 { 00:15:00.401 "name": "BaseBdev2", 00:15:00.401 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:00.401 "is_configured": true, 00:15:00.401 "data_offset": 0, 00:15:00.401 "data_size": 65536 00:15:00.401 } 00:15:00.401 ] 00:15:00.401 }' 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:00.401 04:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.401 [2024-11-27 04:31:56.873045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:00.401 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:00.401 Zero copy mechanism will not be used. 00:15:00.401 Running I/O for 60 seconds... 00:15:00.660 04:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.660 04:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.660 04:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.660 [2024-11-27 04:31:57.195200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.660 04:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.660 04:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:00.918 [2024-11-27 04:31:57.267712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:00.918 [2024-11-27 04:31:57.269888] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.918 [2024-11-27 04:31:57.386803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:00.918 [2024-11-27 04:31:57.387493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.177 [2024-11-27 04:31:57.598742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.177 [2024-11-27 04:31:57.599166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.436 141.00 IOPS, 423.00 MiB/s 
[2024-11-27T04:31:58.023Z] [2024-11-27 04:31:57.928891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:01.695 [2024-11-27 04:31:58.153032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.695 04:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.954 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.954 "name": "raid_bdev1", 00:15:01.954 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:01.954 "strip_size_kb": 0, 00:15:01.954 "state": "online", 00:15:01.954 "raid_level": "raid1", 00:15:01.954 "superblock": false, 00:15:01.954 "num_base_bdevs": 2, 00:15:01.954 "num_base_bdevs_discovered": 2, 00:15:01.954 "num_base_bdevs_operational": 2, 00:15:01.954 "process": { 00:15:01.954 "type": "rebuild", 00:15:01.954 "target": "spare", 
00:15:01.954 "progress": { 00:15:01.954 "blocks": 10240, 00:15:01.954 "percent": 15 00:15:01.954 } 00:15:01.954 }, 00:15:01.954 "base_bdevs_list": [ 00:15:01.954 { 00:15:01.954 "name": "spare", 00:15:01.954 "uuid": "4d51935a-88d8-5255-9033-0d7129970dd8", 00:15:01.954 "is_configured": true, 00:15:01.954 "data_offset": 0, 00:15:01.954 "data_size": 65536 00:15:01.954 }, 00:15:01.954 { 00:15:01.954 "name": "BaseBdev2", 00:15:01.954 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:01.954 "is_configured": true, 00:15:01.954 "data_offset": 0, 00:15:01.954 "data_size": 65536 00:15:01.954 } 00:15:01.954 ] 00:15:01.954 }' 00:15:01.954 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.954 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.954 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.954 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.954 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:01.954 04:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.954 04:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.954 [2024-11-27 04:31:58.389482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.954 [2024-11-27 04:31:58.489211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:01.954 [2024-11-27 04:31:58.489865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:02.212 [2024-11-27 04:31:58.597741] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:02.212 [2024-11-27 04:31:58.600733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.212 [2024-11-27 04:31:58.600785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.212 [2024-11-27 04:31:58.600802] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.212 [2024-11-27 04:31:58.635034] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.212 04:31:58 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.212 "name": "raid_bdev1", 00:15:02.212 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:02.212 "strip_size_kb": 0, 00:15:02.212 "state": "online", 00:15:02.212 "raid_level": "raid1", 00:15:02.212 "superblock": false, 00:15:02.212 "num_base_bdevs": 2, 00:15:02.212 "num_base_bdevs_discovered": 1, 00:15:02.212 "num_base_bdevs_operational": 1, 00:15:02.212 "base_bdevs_list": [ 00:15:02.212 { 00:15:02.212 "name": null, 00:15:02.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.212 "is_configured": false, 00:15:02.212 "data_offset": 0, 00:15:02.212 "data_size": 65536 00:15:02.212 }, 00:15:02.212 { 00:15:02.212 "name": "BaseBdev2", 00:15:02.212 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:02.212 "is_configured": true, 00:15:02.212 "data_offset": 0, 00:15:02.212 "data_size": 65536 00:15:02.212 } 00:15:02.212 ] 00:15:02.212 }' 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.212 04:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.471 143.50 IOPS, 430.50 MiB/s [2024-11-27T04:31:59.058Z] 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.471 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.471 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.471 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.471 04:31:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.471 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.471 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.471 04:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.471 04:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.734 04:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.734 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.734 "name": "raid_bdev1", 00:15:02.734 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:02.735 "strip_size_kb": 0, 00:15:02.735 "state": "online", 00:15:02.735 "raid_level": "raid1", 00:15:02.735 "superblock": false, 00:15:02.735 "num_base_bdevs": 2, 00:15:02.735 "num_base_bdevs_discovered": 1, 00:15:02.735 "num_base_bdevs_operational": 1, 00:15:02.735 "base_bdevs_list": [ 00:15:02.735 { 00:15:02.735 "name": null, 00:15:02.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.735 "is_configured": false, 00:15:02.735 "data_offset": 0, 00:15:02.735 "data_size": 65536 00:15:02.735 }, 00:15:02.735 { 00:15:02.735 "name": "BaseBdev2", 00:15:02.735 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:02.735 "is_configured": true, 00:15:02.735 "data_offset": 0, 00:15:02.735 "data_size": 65536 00:15:02.735 } 00:15:02.735 ] 00:15:02.735 }' 00:15:02.735 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.735 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.735 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.735 04:31:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.735 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.735 04:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.735 04:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.735 [2024-11-27 04:31:59.202245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.735 04:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.735 04:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:02.736 [2024-11-27 04:31:59.256478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:02.736 [2024-11-27 04:31:59.258542] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.997 [2024-11-27 04:31:59.384169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:03.256 [2024-11-27 04:31:59.595205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:03.256 [2024-11-27 04:31:59.595577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:03.515 165.33 IOPS, 496.00 MiB/s [2024-11-27T04:32:00.102Z] [2024-11-27 04:31:59.921401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:03.773 [2024-11-27 04:32:00.123679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:03.773 [2024-11-27 04:32:00.124054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:03.773 
04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.773 "name": "raid_bdev1", 00:15:03.773 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:03.773 "strip_size_kb": 0, 00:15:03.773 "state": "online", 00:15:03.773 "raid_level": "raid1", 00:15:03.773 "superblock": false, 00:15:03.773 "num_base_bdevs": 2, 00:15:03.773 "num_base_bdevs_discovered": 2, 00:15:03.773 "num_base_bdevs_operational": 2, 00:15:03.773 "process": { 00:15:03.773 "type": "rebuild", 00:15:03.773 "target": "spare", 00:15:03.773 "progress": { 00:15:03.773 "blocks": 10240, 00:15:03.773 "percent": 15 00:15:03.773 } 00:15:03.773 }, 00:15:03.773 "base_bdevs_list": [ 00:15:03.773 { 00:15:03.773 "name": "spare", 00:15:03.773 "uuid": "4d51935a-88d8-5255-9033-0d7129970dd8", 00:15:03.773 "is_configured": true, 00:15:03.773 "data_offset": 0, 00:15:03.773 "data_size": 
65536 00:15:03.773 }, 00:15:03.773 { 00:15:03.773 "name": "BaseBdev2", 00:15:03.773 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:03.773 "is_configured": true, 00:15:03.773 "data_offset": 0, 00:15:03.773 "data_size": 65536 00:15:03.773 } 00:15:03.773 ] 00:15:03.773 }' 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.773 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=428 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.032 "name": "raid_bdev1", 00:15:04.032 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:04.032 "strip_size_kb": 0, 00:15:04.032 "state": "online", 00:15:04.032 "raid_level": "raid1", 00:15:04.032 "superblock": false, 00:15:04.032 "num_base_bdevs": 2, 00:15:04.032 "num_base_bdevs_discovered": 2, 00:15:04.032 "num_base_bdevs_operational": 2, 00:15:04.032 "process": { 00:15:04.032 "type": "rebuild", 00:15:04.032 "target": "spare", 00:15:04.032 "progress": { 00:15:04.032 "blocks": 12288, 00:15:04.032 "percent": 18 00:15:04.032 } 00:15:04.032 }, 00:15:04.032 "base_bdevs_list": [ 00:15:04.032 { 00:15:04.032 "name": "spare", 00:15:04.032 "uuid": "4d51935a-88d8-5255-9033-0d7129970dd8", 00:15:04.032 "is_configured": true, 00:15:04.032 "data_offset": 0, 00:15:04.032 "data_size": 65536 00:15:04.032 }, 00:15:04.032 { 00:15:04.032 "name": "BaseBdev2", 00:15:04.032 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:04.032 "is_configured": true, 00:15:04.032 "data_offset": 0, 00:15:04.032 "data_size": 65536 00:15:04.032 } 00:15:04.032 ] 00:15:04.032 }' 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.032 [2024-11-27 04:32:00.444174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.032 04:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.858 137.75 IOPS, 413.25 MiB/s [2024-11-27T04:32:01.445Z] [2024-11-27 04:32:01.219241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:04.858 [2024-11-27 04:32:01.219579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.117 [2024-11-27 04:32:01.555535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 
offset_begin: 30720 offset_end: 36864 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.117 "name": "raid_bdev1", 00:15:05.117 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:05.117 "strip_size_kb": 0, 00:15:05.117 "state": "online", 00:15:05.117 "raid_level": "raid1", 00:15:05.117 "superblock": false, 00:15:05.117 "num_base_bdevs": 2, 00:15:05.117 "num_base_bdevs_discovered": 2, 00:15:05.117 "num_base_bdevs_operational": 2, 00:15:05.117 "process": { 00:15:05.117 "type": "rebuild", 00:15:05.117 "target": "spare", 00:15:05.117 "progress": { 00:15:05.117 "blocks": 30720, 00:15:05.117 "percent": 46 00:15:05.117 } 00:15:05.117 }, 00:15:05.117 "base_bdevs_list": [ 00:15:05.117 { 00:15:05.117 "name": "spare", 00:15:05.117 "uuid": "4d51935a-88d8-5255-9033-0d7129970dd8", 00:15:05.117 "is_configured": true, 00:15:05.117 "data_offset": 0, 00:15:05.117 "data_size": 65536 00:15:05.117 }, 00:15:05.117 { 00:15:05.117 "name": "BaseBdev2", 00:15:05.117 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:05.117 "is_configured": true, 00:15:05.117 "data_offset": 0, 00:15:05.117 "data_size": 65536 00:15:05.117 } 00:15:05.117 ] 00:15:05.117 }' 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.117 [2024-11-27 04:32:01.671598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.117 04:32:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.375 121.20 IOPS, 363.60 MiB/s [2024-11-27T04:32:01.962Z] [2024-11-27 04:32:01.889688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:05.633 [2024-11-27 04:32:02.021482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:06.201 [2024-11-27 04:32:02.662831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.201 "name": "raid_bdev1", 00:15:06.201 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 
00:15:06.201 "strip_size_kb": 0, 00:15:06.201 "state": "online", 00:15:06.201 "raid_level": "raid1", 00:15:06.201 "superblock": false, 00:15:06.201 "num_base_bdevs": 2, 00:15:06.201 "num_base_bdevs_discovered": 2, 00:15:06.201 "num_base_bdevs_operational": 2, 00:15:06.201 "process": { 00:15:06.201 "type": "rebuild", 00:15:06.201 "target": "spare", 00:15:06.201 "progress": { 00:15:06.201 "blocks": 51200, 00:15:06.201 "percent": 78 00:15:06.201 } 00:15:06.201 }, 00:15:06.201 "base_bdevs_list": [ 00:15:06.201 { 00:15:06.201 "name": "spare", 00:15:06.201 "uuid": "4d51935a-88d8-5255-9033-0d7129970dd8", 00:15:06.201 "is_configured": true, 00:15:06.201 "data_offset": 0, 00:15:06.201 "data_size": 65536 00:15:06.201 }, 00:15:06.201 { 00:15:06.201 "name": "BaseBdev2", 00:15:06.201 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:06.201 "is_configured": true, 00:15:06.201 "data_offset": 0, 00:15:06.201 "data_size": 65536 00:15:06.201 } 00:15:06.201 ] 00:15:06.201 }' 00:15:06.201 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.460 [2024-11-27 04:32:02.785967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:06.460 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.460 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.461 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.461 04:32:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.720 109.00 IOPS, 327.00 MiB/s [2024-11-27T04:32:03.307Z] [2024-11-27 04:32:03.130345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:06.980 [2024-11-27 04:32:03.337386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:06.980 [2024-11-27 04:32:03.337743] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:07.240 [2024-11-27 04:32:03.777155] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.500 98.29 IOPS, 294.86 MiB/s [2024-11-27T04:32:04.087Z] [2024-11-27 04:32:03.876919] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:07.500 [2024-11-27 04:32:03.885304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.500 
"name": "raid_bdev1", 00:15:07.500 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:07.500 "strip_size_kb": 0, 00:15:07.500 "state": "online", 00:15:07.500 "raid_level": "raid1", 00:15:07.500 "superblock": false, 00:15:07.500 "num_base_bdevs": 2, 00:15:07.500 "num_base_bdevs_discovered": 2, 00:15:07.500 "num_base_bdevs_operational": 2, 00:15:07.500 "process": { 00:15:07.500 "type": "rebuild", 00:15:07.500 "target": "spare", 00:15:07.500 "progress": { 00:15:07.500 "blocks": 65536, 00:15:07.500 "percent": 100 00:15:07.500 } 00:15:07.500 }, 00:15:07.500 "base_bdevs_list": [ 00:15:07.500 { 00:15:07.500 "name": "spare", 00:15:07.500 "uuid": "4d51935a-88d8-5255-9033-0d7129970dd8", 00:15:07.500 "is_configured": true, 00:15:07.500 "data_offset": 0, 00:15:07.500 "data_size": 65536 00:15:07.500 }, 00:15:07.500 { 00:15:07.500 "name": "BaseBdev2", 00:15:07.500 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:07.500 "is_configured": true, 00:15:07.500 "data_offset": 0, 00:15:07.500 "data_size": 65536 00:15:07.500 } 00:15:07.500 ] 00:15:07.500 }' 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.500 04:32:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.439 89.75 IOPS, 269.25 MiB/s [2024-11-27T04:32:05.026Z] 04:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.439 04:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.439 04:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:08.439 04:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.439 04:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.439 04:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.439 04:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.439 04:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.439 04:32:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.439 04:32:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.439 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.699 "name": "raid_bdev1", 00:15:08.699 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:08.699 "strip_size_kb": 0, 00:15:08.699 "state": "online", 00:15:08.699 "raid_level": "raid1", 00:15:08.699 "superblock": false, 00:15:08.699 "num_base_bdevs": 2, 00:15:08.699 "num_base_bdevs_discovered": 2, 00:15:08.699 "num_base_bdevs_operational": 2, 00:15:08.699 "base_bdevs_list": [ 00:15:08.699 { 00:15:08.699 "name": "spare", 00:15:08.699 "uuid": "4d51935a-88d8-5255-9033-0d7129970dd8", 00:15:08.699 "is_configured": true, 00:15:08.699 "data_offset": 0, 00:15:08.699 "data_size": 65536 00:15:08.699 }, 00:15:08.699 { 00:15:08.699 "name": "BaseBdev2", 00:15:08.699 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:08.699 "is_configured": true, 00:15:08.699 "data_offset": 0, 00:15:08.699 "data_size": 65536 00:15:08.699 } 00:15:08.699 ] 00:15:08.699 }' 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.699 04:32:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.699 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.700 "name": "raid_bdev1", 00:15:08.700 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:08.700 "strip_size_kb": 0, 00:15:08.700 "state": "online", 00:15:08.700 "raid_level": "raid1", 00:15:08.700 "superblock": false, 00:15:08.700 "num_base_bdevs": 2, 00:15:08.700 "num_base_bdevs_discovered": 2, 00:15:08.700 "num_base_bdevs_operational": 2, 00:15:08.700 "base_bdevs_list": [ 
00:15:08.700 { 00:15:08.700 "name": "spare", 00:15:08.700 "uuid": "4d51935a-88d8-5255-9033-0d7129970dd8", 00:15:08.700 "is_configured": true, 00:15:08.700 "data_offset": 0, 00:15:08.700 "data_size": 65536 00:15:08.700 }, 00:15:08.700 { 00:15:08.700 "name": "BaseBdev2", 00:15:08.700 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:08.700 "is_configured": true, 00:15:08.700 "data_offset": 0, 00:15:08.700 "data_size": 65536 00:15:08.700 } 00:15:08.700 ] 00:15:08.700 }' 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.700 04:32:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.959 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.959 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.959 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.959 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.959 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.959 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.959 "name": "raid_bdev1", 00:15:08.959 "uuid": "c17245b8-9193-4bf0-b64a-b8dbc5ac4690", 00:15:08.959 "strip_size_kb": 0, 00:15:08.959 "state": "online", 00:15:08.959 "raid_level": "raid1", 00:15:08.959 "superblock": false, 00:15:08.959 "num_base_bdevs": 2, 00:15:08.959 "num_base_bdevs_discovered": 2, 00:15:08.959 "num_base_bdevs_operational": 2, 00:15:08.959 "base_bdevs_list": [ 00:15:08.959 { 00:15:08.959 "name": "spare", 00:15:08.959 "uuid": "4d51935a-88d8-5255-9033-0d7129970dd8", 00:15:08.959 "is_configured": true, 00:15:08.959 "data_offset": 0, 00:15:08.959 "data_size": 65536 00:15:08.959 }, 00:15:08.959 { 00:15:08.959 "name": "BaseBdev2", 00:15:08.959 "uuid": "f8f49fda-34d5-5944-83e2-3a4b1e653c9f", 00:15:08.959 "is_configured": true, 00:15:08.959 "data_offset": 0, 00:15:08.959 "data_size": 65536 00:15:08.959 } 00:15:08.959 ] 00:15:08.959 }' 00:15:08.959 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.959 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.219 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.219 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.219 04:32:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.219 [2024-11-27 04:32:05.741439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.219 [2024-11-27 04:32:05.741483] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.510 00:15:09.510 Latency(us) 00:15:09.510 [2024-11-27T04:32:06.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.510 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:09.510 raid_bdev1 : 8.99 83.86 251.57 0.00 0.00 17200.43 298.70 113099.68 00:15:09.510 [2024-11-27T04:32:06.097Z] =================================================================================================================== 00:15:09.510 [2024-11-27T04:32:06.097Z] Total : 83.86 251.57 0.00 0.00 17200.43 298.70 113099.68 00:15:09.510 [2024-11-27 04:32:05.872857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.510 [2024-11-27 04:32:05.872943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.510 [2024-11-27 04:32:05.873027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.510 [2024-11-27 04:32:05.873040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:09.510 { 00:15:09.510 "results": [ 00:15:09.510 { 00:15:09.510 "job": "raid_bdev1", 00:15:09.510 "core_mask": "0x1", 00:15:09.510 "workload": "randrw", 00:15:09.510 "percentage": 50, 00:15:09.510 "status": "finished", 00:15:09.510 "queue_depth": 2, 00:15:09.510 "io_size": 3145728, 00:15:09.510 "runtime": 8.991528, 00:15:09.510 "iops": 83.85671489873579, 00:15:09.510 "mibps": 251.57014469620736, 00:15:09.510 "io_failed": 0, 00:15:09.510 "io_timeout": 0, 00:15:09.510 "avg_latency_us": 17200.42807269526, 00:15:09.510 "min_latency_us": 
298.70393013100437, 00:15:09.510 "max_latency_us": 113099.68209606987 00:15:09.510 } 00:15:09.510 ], 00:15:09.510 "core_count": 1 00:15:09.510 } 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:15:09.510 04:32:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:09.770 /dev/nbd0 00:15:09.770 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:09.770 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:09.770 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:09.770 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:09.770 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:09.770 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:09.770 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.771 1+0 records in 00:15:09.771 1+0 records out 00:15:09.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359579 s, 11.4 MB/s 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.771 04:32:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.771 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:10.028 /dev/nbd1 00:15:10.028 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:10.028 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:10.028 04:32:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.029 1+0 records in 00:15:10.029 1+0 records out 00:15:10.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404659 s, 10.1 MB/s 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.029 04:32:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:10.287 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:10.287 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.287 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:10.287 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.287 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:10.287 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.287 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.547 04:32:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76783 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76783 ']' 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76783 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76783 00:15:10.806 killing process with pid 76783 00:15:10.806 Received shutdown signal, test time was about 10.335638 seconds 00:15:10.806 00:15:10.806 Latency(us) 00:15:10.806 [2024-11-27T04:32:07.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.806 [2024-11-27T04:32:07.393Z] =================================================================================================================== 00:15:10.806 [2024-11-27T04:32:07.393Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76783' 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76783 00:15:10.806 [2024-11-27 04:32:07.191028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.806 04:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76783 00:15:11.065 [2024-11-27 04:32:07.443208] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:12.445 00:15:12.445 real 0m13.694s 00:15:12.445 user 0m17.120s 00:15:12.445 sys 0m1.549s 00:15:12.445 ************************************ 00:15:12.445 END TEST raid_rebuild_test_io 00:15:12.445 ************************************ 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.445 04:32:08 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test 
raid1 2 true true true 00:15:12.445 04:32:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:12.445 04:32:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.445 04:32:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.445 ************************************ 00:15:12.445 START TEST raid_rebuild_test_sb_io 00:15:12.445 ************************************ 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77183 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77183 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77183 ']' 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.445 04:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.445 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:12.445 Zero copy mechanism will not be used. 00:15:12.445 [2024-11-27 04:32:08.871387] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:12.446 [2024-11-27 04:32:08.871534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77183 ] 00:15:12.705 [2024-11-27 04:32:09.036144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.705 [2024-11-27 04:32:09.159903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.964 [2024-11-27 04:32:09.385739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.964 [2024-11-27 04:32:09.385796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.231 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.231 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:13.231 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.231 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:13.231 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.231 04:32:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.491 BaseBdev1_malloc 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.491 [2024-11-27 04:32:09.824212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:13.491 [2024-11-27 04:32:09.824303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.491 [2024-11-27 04:32:09.824329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:13.491 [2024-11-27 04:32:09.824342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.491 [2024-11-27 04:32:09.826721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.491 [2024-11-27 04:32:09.826767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.491 BaseBdev1 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.491 BaseBdev2_malloc 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.491 [2024-11-27 04:32:09.881178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:13.491 [2024-11-27 04:32:09.881248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.491 [2024-11-27 04:32:09.881271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:13.491 [2024-11-27 04:32:09.881282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.491 [2024-11-27 04:32:09.883454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.491 [2024-11-27 04:32:09.883519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:13.491 BaseBdev2 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.491 spare_malloc 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.491 spare_delay 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.491 [2024-11-27 04:32:09.966697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:13.491 [2024-11-27 04:32:09.966780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.491 [2024-11-27 04:32:09.966806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:13.491 [2024-11-27 04:32:09.966818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.491 [2024-11-27 04:32:09.969244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.491 [2024-11-27 04:32:09.969294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:13.491 spare 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.491 [2024-11-27 04:32:09.978722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:13.491 [2024-11-27 04:32:09.980729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.491 [2024-11-27 04:32:09.980911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:13.491 [2024-11-27 04:32:09.980927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:13.491 [2024-11-27 04:32:09.981226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:13.491 [2024-11-27 04:32:09.981421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:13.491 [2024-11-27 04:32:09.981437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:13.491 [2024-11-27 04:32:09.981632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.491 04:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.491 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.491 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.491 "name": "raid_bdev1", 00:15:13.491 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:13.491 "strip_size_kb": 0, 00:15:13.491 "state": "online", 00:15:13.491 "raid_level": "raid1", 00:15:13.491 "superblock": true, 00:15:13.491 "num_base_bdevs": 2, 00:15:13.491 "num_base_bdevs_discovered": 2, 00:15:13.491 "num_base_bdevs_operational": 2, 00:15:13.491 "base_bdevs_list": [ 00:15:13.491 { 00:15:13.491 "name": "BaseBdev1", 00:15:13.491 "uuid": "60d1e523-f287-578c-bb82-8ea53fc9d25c", 00:15:13.491 "is_configured": true, 00:15:13.491 "data_offset": 2048, 00:15:13.491 "data_size": 63488 00:15:13.491 }, 00:15:13.491 { 00:15:13.491 "name": "BaseBdev2", 00:15:13.491 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:13.491 "is_configured": true, 00:15:13.491 "data_offset": 2048, 00:15:13.491 "data_size": 63488 00:15:13.491 } 00:15:13.491 ] 00:15:13.491 }' 00:15:13.491 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.491 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r 
'.[].num_blocks' 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.059 [2024-11-27 04:32:10.426250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.059 [2024-11-27 
04:32:10.505789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.059 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:14.060 "name": "raid_bdev1", 00:15:14.060 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:14.060 "strip_size_kb": 0, 00:15:14.060 "state": "online", 00:15:14.060 "raid_level": "raid1", 00:15:14.060 "superblock": true, 00:15:14.060 "num_base_bdevs": 2, 00:15:14.060 "num_base_bdevs_discovered": 1, 00:15:14.060 "num_base_bdevs_operational": 1, 00:15:14.060 "base_bdevs_list": [ 00:15:14.060 { 00:15:14.060 "name": null, 00:15:14.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.060 "is_configured": false, 00:15:14.060 "data_offset": 0, 00:15:14.060 "data_size": 63488 00:15:14.060 }, 00:15:14.060 { 00:15:14.060 "name": "BaseBdev2", 00:15:14.060 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:14.060 "is_configured": true, 00:15:14.060 "data_offset": 2048, 00:15:14.060 "data_size": 63488 00:15:14.060 } 00:15:14.060 ] 00:15:14.060 }' 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.060 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.060 [2024-11-27 04:32:10.621812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:14.060 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:14.060 Zero copy mechanism will not be used. 00:15:14.060 Running I/O for 60 seconds... 
00:15:14.629 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.629 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.629 04:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.629 [2024-11-27 04:32:10.976752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.629 04:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.629 04:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:14.629 [2024-11-27 04:32:11.033093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:14.629 [2024-11-27 04:32:11.034982] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.629 [2024-11-27 04:32:11.160484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:14.629 [2024-11-27 04:32:11.161090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:14.888 [2024-11-27 04:32:11.276358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:14.888 [2024-11-27 04:32:11.276719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:15.146 [2024-11-27 04:32:11.530365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:15.146 [2024-11-27 04:32:11.530843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:15.405 184.00 IOPS, 552.00 MiB/s [2024-11-27T04:32:11.992Z] [2024-11-27 04:32:11.751256] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.664 "name": "raid_bdev1", 00:15:15.664 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:15.664 "strip_size_kb": 0, 00:15:15.664 "state": "online", 00:15:15.664 "raid_level": "raid1", 00:15:15.664 "superblock": true, 00:15:15.664 "num_base_bdevs": 2, 00:15:15.664 "num_base_bdevs_discovered": 2, 00:15:15.664 "num_base_bdevs_operational": 2, 00:15:15.664 "process": { 00:15:15.664 "type": "rebuild", 00:15:15.664 "target": "spare", 00:15:15.664 "progress": { 00:15:15.664 "blocks": 14336, 00:15:15.664 "percent": 22 00:15:15.664 } 00:15:15.664 }, 00:15:15.664 "base_bdevs_list": [ 00:15:15.664 { 00:15:15.664 "name": "spare", 
00:15:15.664 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:15.664 "is_configured": true, 00:15:15.664 "data_offset": 2048, 00:15:15.664 "data_size": 63488 00:15:15.664 }, 00:15:15.664 { 00:15:15.664 "name": "BaseBdev2", 00:15:15.664 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:15.664 "is_configured": true, 00:15:15.664 "data_offset": 2048, 00:15:15.664 "data_size": 63488 00:15:15.664 } 00:15:15.664 ] 00:15:15.664 }' 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.664 [2024-11-27 04:32:12.081889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.664 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.664 [2024-11-27 04:32:12.139471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.664 [2024-11-27 04:32:12.200560] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:15.664 [2024-11-27 04:32:12.203738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.664 [2024-11-27 04:32:12.203779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.664 [2024-11-27 04:32:12.203793] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:15:15.924 [2024-11-27 04:32:12.259909] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.924 04:32:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.924 "name": "raid_bdev1", 00:15:15.924 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:15.924 "strip_size_kb": 0, 00:15:15.924 "state": "online", 00:15:15.924 "raid_level": "raid1", 00:15:15.924 "superblock": true, 00:15:15.924 "num_base_bdevs": 2, 00:15:15.924 "num_base_bdevs_discovered": 1, 00:15:15.924 "num_base_bdevs_operational": 1, 00:15:15.924 "base_bdevs_list": [ 00:15:15.924 { 00:15:15.924 "name": null, 00:15:15.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.924 "is_configured": false, 00:15:15.924 "data_offset": 0, 00:15:15.924 "data_size": 63488 00:15:15.924 }, 00:15:15.924 { 00:15:15.924 "name": "BaseBdev2", 00:15:15.924 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:15.924 "is_configured": true, 00:15:15.924 "data_offset": 2048, 00:15:15.924 "data_size": 63488 00:15:15.924 } 00:15:15.924 ] 00:15:15.924 }' 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.924 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.183 187.50 IOPS, 562.50 MiB/s [2024-11-27T04:32:12.770Z] 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.183 "name": "raid_bdev1", 00:15:16.183 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:16.183 "strip_size_kb": 0, 00:15:16.183 "state": "online", 00:15:16.183 "raid_level": "raid1", 00:15:16.183 "superblock": true, 00:15:16.183 "num_base_bdevs": 2, 00:15:16.183 "num_base_bdevs_discovered": 1, 00:15:16.183 "num_base_bdevs_operational": 1, 00:15:16.183 "base_bdevs_list": [ 00:15:16.183 { 00:15:16.183 "name": null, 00:15:16.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.183 "is_configured": false, 00:15:16.183 "data_offset": 0, 00:15:16.183 "data_size": 63488 00:15:16.183 }, 00:15:16.183 { 00:15:16.183 "name": "BaseBdev2", 00:15:16.183 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:16.183 "is_configured": true, 00:15:16.183 "data_offset": 2048, 00:15:16.183 "data_size": 63488 00:15:16.183 } 00:15:16.183 ] 00:15:16.183 }' 00:15:16.183 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.442 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.442 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.442 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.442 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:16.442 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:16.442 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.442 [2024-11-27 04:32:12.851365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.442 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.442 04:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:16.442 [2024-11-27 04:32:12.929431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:16.442 [2024-11-27 04:32:12.931356] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:16.702 [2024-11-27 04:32:13.055906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:16.702 [2024-11-27 04:32:13.056538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:16.702 [2024-11-27 04:32:13.267584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:16.702 [2024-11-27 04:32:13.267882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:17.269 [2024-11-27 04:32:13.610294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:17.269 193.33 IOPS, 580.00 MiB/s [2024-11-27T04:32:13.856Z] [2024-11-27 04:32:13.731920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.529 04:32:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.529 "name": "raid_bdev1", 00:15:17.529 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:17.529 "strip_size_kb": 0, 00:15:17.529 "state": "online", 00:15:17.529 "raid_level": "raid1", 00:15:17.529 "superblock": true, 00:15:17.529 "num_base_bdevs": 2, 00:15:17.529 "num_base_bdevs_discovered": 2, 00:15:17.529 "num_base_bdevs_operational": 2, 00:15:17.529 "process": { 00:15:17.529 "type": "rebuild", 00:15:17.529 "target": "spare", 00:15:17.529 "progress": { 00:15:17.529 "blocks": 10240, 00:15:17.529 "percent": 16 00:15:17.529 } 00:15:17.529 }, 00:15:17.529 "base_bdevs_list": [ 00:15:17.529 { 00:15:17.529 "name": "spare", 00:15:17.529 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:17.529 "is_configured": true, 00:15:17.529 "data_offset": 2048, 00:15:17.529 "data_size": 63488 00:15:17.529 }, 00:15:17.529 { 00:15:17.529 "name": "BaseBdev2", 00:15:17.529 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:17.529 "is_configured": true, 00:15:17.529 "data_offset": 2048, 00:15:17.529 "data_size": 
63488 00:15:17.529 } 00:15:17.529 ] 00:15:17.529 }' 00:15:17.529 04:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:17.529 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=442 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.529 04:32:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.529 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.529 "name": "raid_bdev1", 00:15:17.529 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:17.529 "strip_size_kb": 0, 00:15:17.529 "state": "online", 00:15:17.529 "raid_level": "raid1", 00:15:17.529 "superblock": true, 00:15:17.529 "num_base_bdevs": 2, 00:15:17.529 "num_base_bdevs_discovered": 2, 00:15:17.529 "num_base_bdevs_operational": 2, 00:15:17.529 "process": { 00:15:17.529 "type": "rebuild", 00:15:17.529 "target": "spare", 00:15:17.529 "progress": { 00:15:17.529 "blocks": 14336, 00:15:17.529 "percent": 22 00:15:17.529 } 00:15:17.529 }, 00:15:17.529 "base_bdevs_list": [ 00:15:17.529 { 00:15:17.529 "name": "spare", 00:15:17.529 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:17.529 "is_configured": true, 00:15:17.529 "data_offset": 2048, 00:15:17.530 "data_size": 63488 00:15:17.530 }, 00:15:17.530 { 00:15:17.530 "name": "BaseBdev2", 00:15:17.530 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:17.530 "is_configured": true, 00:15:17.530 "data_offset": 2048, 00:15:17.530 "data_size": 63488 00:15:17.530 } 00:15:17.530 ] 00:15:17.530 }' 00:15:17.530 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.789 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.789 04:32:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.789 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.789 04:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.789 [2024-11-27 04:32:14.187977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:18.047 [2024-11-27 04:32:14.506134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:18.306 165.00 IOPS, 495.00 MiB/s [2024-11-27T04:32:14.893Z] [2024-11-27 04:32:14.841917] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:18.564 [2024-11-27 04:32:14.949861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:18.823 [2024-11-27 04:32:15.165509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.823 "name": "raid_bdev1", 00:15:18.823 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:18.823 "strip_size_kb": 0, 00:15:18.823 "state": "online", 00:15:18.823 "raid_level": "raid1", 00:15:18.823 "superblock": true, 00:15:18.823 "num_base_bdevs": 2, 00:15:18.823 "num_base_bdevs_discovered": 2, 00:15:18.823 "num_base_bdevs_operational": 2, 00:15:18.823 "process": { 00:15:18.823 "type": "rebuild", 00:15:18.823 "target": "spare", 00:15:18.823 "progress": { 00:15:18.823 "blocks": 32768, 00:15:18.823 "percent": 51 00:15:18.823 } 00:15:18.823 }, 00:15:18.823 "base_bdevs_list": [ 00:15:18.823 { 00:15:18.823 "name": "spare", 00:15:18.823 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:18.823 "is_configured": true, 00:15:18.823 "data_offset": 2048, 00:15:18.823 "data_size": 63488 00:15:18.823 }, 00:15:18.823 { 00:15:18.823 "name": "BaseBdev2", 00:15:18.823 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:18.823 "is_configured": true, 00:15:18.823 "data_offset": 2048, 00:15:18.823 "data_size": 63488 00:15:18.823 } 00:15:18.823 ] 00:15:18.823 }' 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.823 [2024-11-27 04:32:15.279652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.823 04:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.341 141.00 IOPS, 423.00 MiB/s [2024-11-27T04:32:15.928Z] [2024-11-27 04:32:15.720754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:19.599 [2024-11-27 04:32:16.043275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:19.858 [2024-11-27 04:32:16.246544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.858 "name": "raid_bdev1", 00:15:19.858 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:19.858 "strip_size_kb": 0, 00:15:19.858 "state": "online", 00:15:19.858 "raid_level": "raid1", 00:15:19.858 "superblock": true, 00:15:19.858 "num_base_bdevs": 2, 00:15:19.858 "num_base_bdevs_discovered": 2, 00:15:19.858 "num_base_bdevs_operational": 2, 00:15:19.858 "process": { 00:15:19.858 "type": "rebuild", 00:15:19.858 "target": "spare", 00:15:19.858 "progress": { 00:15:19.858 "blocks": 47104, 00:15:19.858 "percent": 74 00:15:19.858 } 00:15:19.858 }, 00:15:19.858 "base_bdevs_list": [ 00:15:19.858 { 00:15:19.858 "name": "spare", 00:15:19.858 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:19.858 "is_configured": true, 00:15:19.858 "data_offset": 2048, 00:15:19.858 "data_size": 63488 00:15:19.858 }, 00:15:19.858 { 00:15:19.858 "name": "BaseBdev2", 00:15:19.858 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:19.858 "is_configured": true, 00:15:19.858 "data_offset": 2048, 00:15:19.858 "data_size": 63488 00:15:19.858 } 00:15:19.858 ] 00:15:19.858 }' 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.858 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.117 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.117 04:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.117 [2024-11-27 04:32:16.477081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:20.685 123.83 IOPS, 371.50 MiB/s [2024-11-27T04:32:17.272Z] [2024-11-27 04:32:17.252496] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:20.945 [2024-11-27 04:32:17.358092] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:20.945 [2024-11-27 04:32:17.361061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.945 "name": "raid_bdev1", 00:15:20.945 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:20.945 "strip_size_kb": 0, 00:15:20.945 
"state": "online", 00:15:20.945 "raid_level": "raid1", 00:15:20.945 "superblock": true, 00:15:20.945 "num_base_bdevs": 2, 00:15:20.945 "num_base_bdevs_discovered": 2, 00:15:20.945 "num_base_bdevs_operational": 2, 00:15:20.945 "base_bdevs_list": [ 00:15:20.945 { 00:15:20.945 "name": "spare", 00:15:20.945 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:20.945 "is_configured": true, 00:15:20.945 "data_offset": 2048, 00:15:20.945 "data_size": 63488 00:15:20.945 }, 00:15:20.945 { 00:15:20.945 "name": "BaseBdev2", 00:15:20.945 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:20.945 "is_configured": true, 00:15:20.945 "data_offset": 2048, 00:15:20.945 "data_size": 63488 00:15:20.945 } 00:15:20.945 ] 00:15:20.945 }' 00:15:20.945 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.204 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:21.204 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.204 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:21.204 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:21.204 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.205 04:32:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.205 112.00 IOPS, 336.00 MiB/s [2024-11-27T04:32:17.792Z] 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.205 "name": "raid_bdev1", 00:15:21.205 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:21.205 "strip_size_kb": 0, 00:15:21.205 "state": "online", 00:15:21.205 "raid_level": "raid1", 00:15:21.205 "superblock": true, 00:15:21.205 "num_base_bdevs": 2, 00:15:21.205 "num_base_bdevs_discovered": 2, 00:15:21.205 "num_base_bdevs_operational": 2, 00:15:21.205 "base_bdevs_list": [ 00:15:21.205 { 00:15:21.205 "name": "spare", 00:15:21.205 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:21.205 "is_configured": true, 00:15:21.205 "data_offset": 2048, 00:15:21.205 "data_size": 63488 00:15:21.205 }, 00:15:21.205 { 00:15:21.205 "name": "BaseBdev2", 00:15:21.205 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:21.205 "is_configured": true, 00:15:21.205 "data_offset": 2048, 00:15:21.205 "data_size": 63488 00:15:21.205 } 00:15:21.205 ] 00:15:21.205 }' 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.205 "name": "raid_bdev1", 00:15:21.205 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:21.205 "strip_size_kb": 0, 00:15:21.205 "state": "online", 00:15:21.205 "raid_level": "raid1", 00:15:21.205 "superblock": true, 00:15:21.205 "num_base_bdevs": 2, 00:15:21.205 
"num_base_bdevs_discovered": 2, 00:15:21.205 "num_base_bdevs_operational": 2, 00:15:21.205 "base_bdevs_list": [ 00:15:21.205 { 00:15:21.205 "name": "spare", 00:15:21.205 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:21.205 "is_configured": true, 00:15:21.205 "data_offset": 2048, 00:15:21.205 "data_size": 63488 00:15:21.205 }, 00:15:21.205 { 00:15:21.205 "name": "BaseBdev2", 00:15:21.205 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:21.205 "is_configured": true, 00:15:21.205 "data_offset": 2048, 00:15:21.205 "data_size": 63488 00:15:21.205 } 00:15:21.205 ] 00:15:21.205 }' 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.205 04:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.775 [2024-11-27 04:32:18.098133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.775 [2024-11-27 04:32:18.098169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.775 00:15:21.775 Latency(us) 00:15:21.775 [2024-11-27T04:32:18.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.775 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:21.775 raid_bdev1 : 7.50 106.73 320.20 0.00 0.00 13248.62 305.86 116304.94 00:15:21.775 [2024-11-27T04:32:18.362Z] =================================================================================================================== 00:15:21.775 [2024-11-27T04:32:18.362Z] Total : 106.73 320.20 0.00 0.00 13248.62 305.86 116304.94 00:15:21.775 [2024-11-27 
04:32:18.136181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.775 [2024-11-27 04:32:18.136247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.775 [2024-11-27 04:32:18.136326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.775 [2024-11-27 04:32:18.136336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:21.775 { 00:15:21.775 "results": [ 00:15:21.775 { 00:15:21.775 "job": "raid_bdev1", 00:15:21.775 "core_mask": "0x1", 00:15:21.775 "workload": "randrw", 00:15:21.775 "percentage": 50, 00:15:21.775 "status": "finished", 00:15:21.775 "queue_depth": 2, 00:15:21.775 "io_size": 3145728, 00:15:21.775 "runtime": 7.504596, 00:15:21.775 "iops": 106.73459304138424, 00:15:21.775 "mibps": 320.20377912415273, 00:15:21.775 "io_failed": 0, 00:15:21.775 "io_timeout": 0, 00:15:21.775 "avg_latency_us": 13248.62029013951, 00:15:21.775 "min_latency_us": 305.8585152838428, 00:15:21.775 "max_latency_us": 116304.93624454149 00:15:21.775 } 00:15:21.775 ], 00:15:21.775 "core_count": 1 00:15:21.775 } 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.775 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:22.035 /dev/nbd0 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.035 04:32:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.035 1+0 records in 00:15:22.035 1+0 records out 00:15:22.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449254 s, 9.1 MB/s 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 
-- # local rpc_server=/var/tmp/spdk.sock 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.035 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:22.294 /dev/nbd1 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.294 1+0 records in 00:15:22.294 1+0 records out 00:15:22.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363669 s, 11.3 MB/s 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.294 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:22.553 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:22.553 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.553 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:22.553 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.553 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:22.553 04:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.553 04:32:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:22.553 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:22.553 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:22.553 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:22.553 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.553 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.553 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.811 [2024-11-27 04:32:19.376468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:22.811 [2024-11-27 04:32:19.376543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.811 [2024-11-27 04:32:19.376571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:22.811 [2024-11-27 04:32:19.376582] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.811 [2024-11-27 04:32:19.379074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.811 [2024-11-27 04:32:19.379125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:22.811 [2024-11-27 04:32:19.379240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:22.811 [2024-11-27 04:32:19.379294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.811 [2024-11-27 04:32:19.379448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.811 spare 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.811 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.070 [2024-11-27 04:32:19.479405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:23.070 [2024-11-27 04:32:19.479476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:23.070 [2024-11-27 04:32:19.479882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:23.070 [2024-11-27 04:32:19.480155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:23.070 [2024-11-27 04:32:19.480175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:23.071 [2024-11-27 04:32:19.480446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.071 "name": "raid_bdev1", 00:15:23.071 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:23.071 "strip_size_kb": 0, 00:15:23.071 "state": "online", 00:15:23.071 "raid_level": 
"raid1", 00:15:23.071 "superblock": true, 00:15:23.071 "num_base_bdevs": 2, 00:15:23.071 "num_base_bdevs_discovered": 2, 00:15:23.071 "num_base_bdevs_operational": 2, 00:15:23.071 "base_bdevs_list": [ 00:15:23.071 { 00:15:23.071 "name": "spare", 00:15:23.071 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:23.071 "is_configured": true, 00:15:23.071 "data_offset": 2048, 00:15:23.071 "data_size": 63488 00:15:23.071 }, 00:15:23.071 { 00:15:23.071 "name": "BaseBdev2", 00:15:23.071 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:23.071 "is_configured": true, 00:15:23.071 "data_offset": 2048, 00:15:23.071 "data_size": 63488 00:15:23.071 } 00:15:23.071 ] 00:15:23.071 }' 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.071 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.641 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.641 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.641 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.641 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.642 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.642 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.642 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.642 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.642 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.642 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.642 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.642 "name": "raid_bdev1", 00:15:23.642 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:23.642 "strip_size_kb": 0, 00:15:23.642 "state": "online", 00:15:23.642 "raid_level": "raid1", 00:15:23.642 "superblock": true, 00:15:23.642 "num_base_bdevs": 2, 00:15:23.642 "num_base_bdevs_discovered": 2, 00:15:23.642 "num_base_bdevs_operational": 2, 00:15:23.642 "base_bdevs_list": [ 00:15:23.642 { 00:15:23.642 "name": "spare", 00:15:23.642 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:23.642 "is_configured": true, 00:15:23.642 "data_offset": 2048, 00:15:23.642 "data_size": 63488 00:15:23.642 }, 00:15:23.642 { 00:15:23.642 "name": "BaseBdev2", 00:15:23.642 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:23.642 "is_configured": true, 00:15:23.642 "data_offset": 2048, 00:15:23.642 "data_size": 63488 00:15:23.642 } 00:15:23.642 ] 00:15:23.642 }' 00:15:23.642 04:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.642 [2024-11-27 04:32:20.091473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.642 04:32:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.642 "name": "raid_bdev1", 00:15:23.642 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:23.642 "strip_size_kb": 0, 00:15:23.642 "state": "online", 00:15:23.642 "raid_level": "raid1", 00:15:23.642 "superblock": true, 00:15:23.642 "num_base_bdevs": 2, 00:15:23.642 "num_base_bdevs_discovered": 1, 00:15:23.642 "num_base_bdevs_operational": 1, 00:15:23.642 "base_bdevs_list": [ 00:15:23.642 { 00:15:23.642 "name": null, 00:15:23.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.642 "is_configured": false, 00:15:23.642 "data_offset": 0, 00:15:23.642 "data_size": 63488 00:15:23.642 }, 00:15:23.642 { 00:15:23.642 "name": "BaseBdev2", 00:15:23.642 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:23.642 "is_configured": true, 00:15:23.642 "data_offset": 2048, 00:15:23.642 "data_size": 63488 00:15:23.642 } 00:15:23.642 ] 00:15:23.642 }' 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.642 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.209 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.209 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.209 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.209 [2024-11-27 04:32:20.538769] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.209 [2024-11-27 04:32:20.538995] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:24.209 [2024-11-27 04:32:20.539011] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:24.209 [2024-11-27 04:32:20.539054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.209 [2024-11-27 04:32:20.555679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:24.209 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.209 04:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:24.209 [2024-11-27 04:32:20.557548] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.145 "name": "raid_bdev1", 00:15:25.145 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:25.145 "strip_size_kb": 0, 00:15:25.145 "state": "online", 00:15:25.145 "raid_level": "raid1", 00:15:25.145 "superblock": true, 00:15:25.145 "num_base_bdevs": 2, 00:15:25.145 "num_base_bdevs_discovered": 2, 00:15:25.145 "num_base_bdevs_operational": 2, 00:15:25.145 "process": { 00:15:25.145 "type": "rebuild", 00:15:25.145 "target": "spare", 00:15:25.145 "progress": { 00:15:25.145 "blocks": 20480, 00:15:25.145 "percent": 32 00:15:25.145 } 00:15:25.145 }, 00:15:25.145 "base_bdevs_list": [ 00:15:25.145 { 00:15:25.145 "name": "spare", 00:15:25.145 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:25.145 "is_configured": true, 00:15:25.145 "data_offset": 2048, 00:15:25.145 "data_size": 63488 00:15:25.145 }, 00:15:25.145 { 00:15:25.145 "name": "BaseBdev2", 00:15:25.145 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:25.145 "is_configured": true, 00:15:25.145 "data_offset": 2048, 00:15:25.145 "data_size": 63488 00:15:25.145 } 00:15:25.145 ] 00:15:25.145 }' 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.145 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.145 [2024-11-27 04:32:21.713233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.403 [2024-11-27 04:32:21.763471] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.403 [2024-11-27 04:32:21.763623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.403 [2024-11-27 04:32:21.763639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.403 [2024-11-27 04:32:21.763649] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.403 "name": "raid_bdev1", 00:15:25.403 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:25.403 "strip_size_kb": 0, 00:15:25.403 "state": "online", 00:15:25.403 "raid_level": "raid1", 00:15:25.403 "superblock": true, 00:15:25.403 "num_base_bdevs": 2, 00:15:25.403 "num_base_bdevs_discovered": 1, 00:15:25.403 "num_base_bdevs_operational": 1, 00:15:25.403 "base_bdevs_list": [ 00:15:25.403 { 00:15:25.403 "name": null, 00:15:25.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.403 "is_configured": false, 00:15:25.403 "data_offset": 0, 00:15:25.403 "data_size": 63488 00:15:25.403 }, 00:15:25.403 { 00:15:25.403 "name": "BaseBdev2", 00:15:25.403 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:25.403 "is_configured": true, 00:15:25.403 "data_offset": 2048, 00:15:25.403 "data_size": 63488 00:15:25.403 } 00:15:25.403 ] 00:15:25.403 }' 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.403 04:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.971 04:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:25.971 04:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:25.971 04:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.971 [2024-11-27 04:32:22.258826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:25.971 [2024-11-27 04:32:22.258905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.971 [2024-11-27 04:32:22.258933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:25.971 [2024-11-27 04:32:22.258947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.971 [2024-11-27 04:32:22.259511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.971 [2024-11-27 04:32:22.259545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:25.971 [2024-11-27 04:32:22.259666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:25.971 [2024-11-27 04:32:22.259689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:25.971 [2024-11-27 04:32:22.259700] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:25.971 [2024-11-27 04:32:22.259724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:25.971 [2024-11-27 04:32:22.276878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:25.971 spare 00:15:25.971 04:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.971 04:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:25.971 [2024-11-27 04:32:22.278900] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.909 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.910 "name": "raid_bdev1", 00:15:26.910 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:26.910 "strip_size_kb": 0, 00:15:26.910 
"state": "online", 00:15:26.910 "raid_level": "raid1", 00:15:26.910 "superblock": true, 00:15:26.910 "num_base_bdevs": 2, 00:15:26.910 "num_base_bdevs_discovered": 2, 00:15:26.910 "num_base_bdevs_operational": 2, 00:15:26.910 "process": { 00:15:26.910 "type": "rebuild", 00:15:26.910 "target": "spare", 00:15:26.910 "progress": { 00:15:26.910 "blocks": 20480, 00:15:26.910 "percent": 32 00:15:26.910 } 00:15:26.910 }, 00:15:26.910 "base_bdevs_list": [ 00:15:26.910 { 00:15:26.910 "name": "spare", 00:15:26.910 "uuid": "1f8f5832-f725-5146-9340-5bed303909ac", 00:15:26.910 "is_configured": true, 00:15:26.910 "data_offset": 2048, 00:15:26.910 "data_size": 63488 00:15:26.910 }, 00:15:26.910 { 00:15:26.910 "name": "BaseBdev2", 00:15:26.910 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:26.910 "is_configured": true, 00:15:26.910 "data_offset": 2048, 00:15:26.910 "data_size": 63488 00:15:26.910 } 00:15:26.910 ] 00:15:26.910 }' 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.910 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.910 [2024-11-27 04:32:23.414522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.910 [2024-11-27 04:32:23.484803] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:26.910 [2024-11-27 04:32:23.484870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.910 [2024-11-27 04:32:23.484887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.910 [2024-11-27 04:32:23.484894] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.173 04:32:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.173 "name": "raid_bdev1", 00:15:27.173 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:27.173 "strip_size_kb": 0, 00:15:27.173 "state": "online", 00:15:27.173 "raid_level": "raid1", 00:15:27.173 "superblock": true, 00:15:27.173 "num_base_bdevs": 2, 00:15:27.173 "num_base_bdevs_discovered": 1, 00:15:27.173 "num_base_bdevs_operational": 1, 00:15:27.173 "base_bdevs_list": [ 00:15:27.173 { 00:15:27.173 "name": null, 00:15:27.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.173 "is_configured": false, 00:15:27.173 "data_offset": 0, 00:15:27.173 "data_size": 63488 00:15:27.173 }, 00:15:27.173 { 00:15:27.173 "name": "BaseBdev2", 00:15:27.173 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:27.173 "is_configured": true, 00:15:27.173 "data_offset": 2048, 00:15:27.173 "data_size": 63488 00:15:27.173 } 00:15:27.173 ] 00:15:27.173 }' 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.173 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.432 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.432 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.432 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.432 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.432 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.432 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.432 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.432 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.432 04:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.432 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.692 "name": "raid_bdev1", 00:15:27.692 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:27.692 "strip_size_kb": 0, 00:15:27.692 "state": "online", 00:15:27.692 "raid_level": "raid1", 00:15:27.692 "superblock": true, 00:15:27.692 "num_base_bdevs": 2, 00:15:27.692 "num_base_bdevs_discovered": 1, 00:15:27.692 "num_base_bdevs_operational": 1, 00:15:27.692 "base_bdevs_list": [ 00:15:27.692 { 00:15:27.692 "name": null, 00:15:27.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.692 "is_configured": false, 00:15:27.692 "data_offset": 0, 00:15:27.692 "data_size": 63488 00:15:27.692 }, 00:15:27.692 { 00:15:27.692 "name": "BaseBdev2", 00:15:27.692 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:27.692 "is_configured": true, 00:15:27.692 "data_offset": 2048, 00:15:27.692 "data_size": 63488 00:15:27.692 } 00:15:27.692 ] 00:15:27.692 }' 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.692 [2024-11-27 04:32:24.161379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:27.692 [2024-11-27 04:32:24.161457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.692 [2024-11-27 04:32:24.161485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:27.692 [2024-11-27 04:32:24.161497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.692 [2024-11-27 04:32:24.161959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.692 [2024-11-27 04:32:24.161985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:27.692 [2024-11-27 04:32:24.162071] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:27.692 [2024-11-27 04:32:24.162112] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:27.692 [2024-11-27 04:32:24.162124] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:27.692 [2024-11-27 04:32:24.162134] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:27.692 BaseBdev1 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.692 04:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.639 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.906 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.906 "name": "raid_bdev1", 00:15:28.906 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:28.906 "strip_size_kb": 0, 00:15:28.906 "state": "online", 00:15:28.906 "raid_level": "raid1", 00:15:28.906 "superblock": true, 00:15:28.906 "num_base_bdevs": 2, 00:15:28.906 "num_base_bdevs_discovered": 1, 00:15:28.906 "num_base_bdevs_operational": 1, 00:15:28.906 "base_bdevs_list": [ 00:15:28.906 { 00:15:28.906 "name": null, 00:15:28.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.906 "is_configured": false, 00:15:28.906 "data_offset": 0, 00:15:28.906 "data_size": 63488 00:15:28.906 }, 00:15:28.906 { 00:15:28.906 "name": "BaseBdev2", 00:15:28.906 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:28.906 "is_configured": true, 00:15:28.906 "data_offset": 2048, 00:15:28.906 "data_size": 63488 00:15:28.906 } 00:15:28.906 ] 00:15:28.906 }' 00:15:28.906 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.906 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.202 "name": "raid_bdev1", 00:15:29.202 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:29.202 "strip_size_kb": 0, 00:15:29.202 "state": "online", 00:15:29.202 "raid_level": "raid1", 00:15:29.202 "superblock": true, 00:15:29.202 "num_base_bdevs": 2, 00:15:29.202 "num_base_bdevs_discovered": 1, 00:15:29.202 "num_base_bdevs_operational": 1, 00:15:29.202 "base_bdevs_list": [ 00:15:29.202 { 00:15:29.202 "name": null, 00:15:29.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.202 "is_configured": false, 00:15:29.202 "data_offset": 0, 00:15:29.202 "data_size": 63488 00:15:29.202 }, 00:15:29.202 { 00:15:29.202 "name": "BaseBdev2", 00:15:29.202 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:29.202 "is_configured": true, 00:15:29.202 "data_offset": 2048, 00:15:29.202 "data_size": 63488 00:15:29.202 } 00:15:29.202 ] 00:15:29.202 }' 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.202 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.473 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.473 [2024-11-27 04:32:25.806970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.474 [2024-11-27 04:32:25.807155] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:29.474 [2024-11-27 04:32:25.807173] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:29.474 request: 00:15:29.474 { 00:15:29.474 "base_bdev": "BaseBdev1", 00:15:29.474 "raid_bdev": "raid_bdev1", 00:15:29.474 "method": "bdev_raid_add_base_bdev", 00:15:29.474 "req_id": 1 00:15:29.474 } 00:15:29.474 Got JSON-RPC error response 00:15:29.474 response: 00:15:29.474 { 00:15:29.474 "code": -22, 00:15:29.474 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:29.474 } 00:15:29.474 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:15:29.474 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:29.474 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.474 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.474 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.474 04:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.444 "name": "raid_bdev1", 00:15:30.444 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:30.444 "strip_size_kb": 0, 00:15:30.444 "state": "online", 00:15:30.444 "raid_level": "raid1", 00:15:30.444 "superblock": true, 00:15:30.444 "num_base_bdevs": 2, 00:15:30.444 "num_base_bdevs_discovered": 1, 00:15:30.444 "num_base_bdevs_operational": 1, 00:15:30.444 "base_bdevs_list": [ 00:15:30.444 { 00:15:30.444 "name": null, 00:15:30.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.444 "is_configured": false, 00:15:30.444 "data_offset": 0, 00:15:30.444 "data_size": 63488 00:15:30.444 }, 00:15:30.444 { 00:15:30.444 "name": "BaseBdev2", 00:15:30.444 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:30.444 "is_configured": true, 00:15:30.444 "data_offset": 2048, 00:15:30.444 "data_size": 63488 00:15:30.444 } 00:15:30.444 ] 00:15:30.444 }' 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.444 04:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.707 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.707 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.707 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.707 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.707 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.707 04:32:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.707 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.707 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.707 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.967 "name": "raid_bdev1", 00:15:30.967 "uuid": "32ad3ce4-0614-442c-9e95-d8ba1023a1d4", 00:15:30.967 "strip_size_kb": 0, 00:15:30.967 "state": "online", 00:15:30.967 "raid_level": "raid1", 00:15:30.967 "superblock": true, 00:15:30.967 "num_base_bdevs": 2, 00:15:30.967 "num_base_bdevs_discovered": 1, 00:15:30.967 "num_base_bdevs_operational": 1, 00:15:30.967 "base_bdevs_list": [ 00:15:30.967 { 00:15:30.967 "name": null, 00:15:30.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.967 "is_configured": false, 00:15:30.967 "data_offset": 0, 00:15:30.967 "data_size": 63488 00:15:30.967 }, 00:15:30.967 { 00:15:30.967 "name": "BaseBdev2", 00:15:30.967 "uuid": "8a4a308b-5873-5ab4-ad7d-4fbaa9999258", 00:15:30.967 "is_configured": true, 00:15:30.967 "data_offset": 2048, 00:15:30.967 "data_size": 63488 00:15:30.967 } 00:15:30.967 ] 00:15:30.967 }' 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.967 04:32:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77183 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77183 ']' 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77183 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77183 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.967 killing process with pid 77183 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77183' 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77183 00:15:30.967 Received shutdown signal, test time was about 16.849132 seconds 00:15:30.967 00:15:30.967 Latency(us) 00:15:30.967 [2024-11-27T04:32:27.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.967 [2024-11-27T04:32:27.554Z] =================================================================================================================== 00:15:30.967 [2024-11-27T04:32:27.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.967 [2024-11-27 04:32:27.440549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.967 04:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77183 00:15:30.967 [2024-11-27 04:32:27.440698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.967 [2024-11-27 04:32:27.440765] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.967 [2024-11-27 04:32:27.440779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:31.226 [2024-11-27 04:32:27.679764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.617 04:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:32.617 00:15:32.617 real 0m20.094s 00:15:32.617 user 0m26.275s 00:15:32.617 sys 0m2.202s 00:15:32.617 04:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.618 ************************************ 00:15:32.618 END TEST raid_rebuild_test_sb_io 00:15:32.618 ************************************ 00:15:32.618 04:32:28 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:32.618 04:32:28 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:32.618 04:32:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:32.618 04:32:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.618 04:32:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.618 ************************************ 00:15:32.618 START TEST raid_rebuild_test 00:15:32.618 ************************************ 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:32.618 04:32:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:32.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77874 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77874 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77874 ']' 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.618 04:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.618 [2024-11-27 04:32:29.036339] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:15:32.618 [2024-11-27 04:32:29.036549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77874 ] 00:15:32.618 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:32.618 Zero copy mechanism will not be used. 00:15:32.877 [2024-11-27 04:32:29.210551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.877 [2024-11-27 04:32:29.325753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.136 [2024-11-27 04:32:29.536446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.136 [2024-11-27 04:32:29.536590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.396 BaseBdev1_malloc 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:33.396 [2024-11-27 04:32:29.936557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.396 [2024-11-27 04:32:29.936662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.396 [2024-11-27 04:32:29.936690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:33.396 [2024-11-27 04:32:29.936705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.396 [2024-11-27 04:32:29.939009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.396 [2024-11-27 04:32:29.939049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.396 BaseBdev1 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.396 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.656 BaseBdev2_malloc 00:15:33.656 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.656 04:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:33.656 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.656 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.656 [2024-11-27 04:32:29.991670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:33.656 [2024-11-27 04:32:29.991780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:33.656 [2024-11-27 04:32:29.991810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:33.656 [2024-11-27 04:32:29.991821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.656 [2024-11-27 04:32:29.994226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.656 [2024-11-27 04:32:29.994263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:33.656 BaseBdev2 00:15:33.656 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.656 04:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.656 04:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:33.656 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.656 04:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.656 BaseBdev3_malloc 00:15:33.656 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.656 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:33.656 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.656 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.656 [2024-11-27 04:32:30.055254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:33.656 [2024-11-27 04:32:30.055318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.656 [2024-11-27 04:32:30.055340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:33.657 [2024-11-27 04:32:30.055352] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.657 [2024-11-27 04:32:30.057581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.657 [2024-11-27 04:32:30.057637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:33.657 BaseBdev3 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.657 BaseBdev4_malloc 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.657 [2024-11-27 04:32:30.103758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:33.657 [2024-11-27 04:32:30.103824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.657 [2024-11-27 04:32:30.103845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:33.657 [2024-11-27 04:32:30.103856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.657 [2024-11-27 04:32:30.105905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.657 [2024-11-27 04:32:30.105948] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:33.657 BaseBdev4 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.657 spare_malloc 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.657 spare_delay 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.657 [2024-11-27 04:32:30.166421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.657 [2024-11-27 04:32:30.166538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.657 [2024-11-27 04:32:30.166577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:33.657 [2024-11-27 04:32:30.166612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.657 [2024-11-27 
04:32:30.169048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.657 [2024-11-27 04:32:30.169138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.657 spare 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.657 [2024-11-27 04:32:30.178473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.657 [2024-11-27 04:32:30.180455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.657 [2024-11-27 04:32:30.180585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.657 [2024-11-27 04:32:30.180675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:33.657 [2024-11-27 04:32:30.180861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:33.657 [2024-11-27 04:32:30.180914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:33.657 [2024-11-27 04:32:30.181298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:33.657 [2024-11-27 04:32:30.181571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:33.657 [2024-11-27 04:32:30.181624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:33.657 [2024-11-27 04:32:30.181876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
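The raid1 array above is assembled from four passthru-wrapped malloc bdevs, each created with `bdev_malloc_create 32 512` (32 MiB of 512-byte blocks), which is where the `blockcnt 65536, blocklen 512` line comes from; because raid1 mirrors rather than stripes, `strip_size_kb` stays 0 and the array's usable capacity equals a single member's. A minimal sketch of that arithmetic (plain Python, no SPDK APIs involved):

```python
# Each base bdev: "bdev_malloc_create 32 512" -> 32 MiB backed by 512 B blocks.
base_size_mib = 32
block_len = 512
block_cnt = base_size_mib * 1024 * 1024 // block_len
print(block_cnt)  # 65536, matching "blockcnt 65536, blocklen 512"

# raid1 mirrors every member, so usable capacity is one member's size,
# not the sum across the four base bdevs (and no strip size applies).
num_base_bdevs = 4
raid1_capacity_bytes = block_cnt * block_len
print(raid1_capacity_bytes)  # 33554432 bytes = 32 MiB
```

This also explains the `"data_size": 65536` field repeated for every entry in `base_bdevs_list` below: it is the per-member block count, not a byte size.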
00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.657 "name": "raid_bdev1", 00:15:33.657 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:33.657 "strip_size_kb": 0, 00:15:33.657 "state": "online", 00:15:33.657 "raid_level": 
"raid1", 00:15:33.657 "superblock": false, 00:15:33.657 "num_base_bdevs": 4, 00:15:33.657 "num_base_bdevs_discovered": 4, 00:15:33.657 "num_base_bdevs_operational": 4, 00:15:33.657 "base_bdevs_list": [ 00:15:33.657 { 00:15:33.657 "name": "BaseBdev1", 00:15:33.657 "uuid": "cd162897-867d-5053-8bd9-7e79b84dc9a2", 00:15:33.657 "is_configured": true, 00:15:33.657 "data_offset": 0, 00:15:33.657 "data_size": 65536 00:15:33.657 }, 00:15:33.657 { 00:15:33.657 "name": "BaseBdev2", 00:15:33.657 "uuid": "fd36f9ea-b14b-55d1-bcfd-02408588814c", 00:15:33.657 "is_configured": true, 00:15:33.657 "data_offset": 0, 00:15:33.657 "data_size": 65536 00:15:33.657 }, 00:15:33.657 { 00:15:33.657 "name": "BaseBdev3", 00:15:33.657 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:33.657 "is_configured": true, 00:15:33.657 "data_offset": 0, 00:15:33.657 "data_size": 65536 00:15:33.657 }, 00:15:33.657 { 00:15:33.657 "name": "BaseBdev4", 00:15:33.657 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:33.657 "is_configured": true, 00:15:33.657 "data_offset": 0, 00:15:33.657 "data_size": 65536 00:15:33.657 } 00:15:33.657 ] 00:15:33.657 }' 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.657 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:34.227 [2024-11-27 04:32:30.614047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.227 04:32:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.227 04:32:30 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:34.492 [2024-11-27 04:32:30.901269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:34.492 /dev/nbd0 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.492 1+0 records in 00:15:34.492 1+0 records out 00:15:34.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399508 s, 10.3 MB/s 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
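The `waitfornbd` helper traced above retries a `grep -q -w nbd0 /proc/partitions` check up to 20 times (`(( i <= 20 ))`) before running the one-block `dd` sanity read against the device. The same bounded-poll pattern, sketched in Python with a toy predicate standing in for the grep (names here are illustrative, not part of the test scripts):

```python
import time

def wait_for(predicate, attempts=20, delay=0.0):
    """Poll `predicate` up to `attempts` times, mirroring waitfornbd's
    bounded grep loop; return True as soon as it succeeds."""
    for _ in range(attempts):
        if predicate():
            return True
        time.sleep(delay)
    return False

# Toy stand-in for "grep -q -w nbd0 /proc/partitions":
calls = {"n": 0}
def device_visible():
    calls["n"] += 1
    return calls["n"] >= 3  # the device "appears" on the third poll

ready = wait_for(device_visible)
print(ready, calls["n"])  # True 3
```

Bounding the loop matters here: if the NBD device never shows up, the test fails fast with `return 1` instead of hanging the CI job.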
00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:34.492 04:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:41.069 65536+0 records in 00:15:41.069 65536+0 records out 00:15:41.069 33554432 bytes (34 MB, 32 MiB) copied, 5.68745 s, 5.9 MB/s 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.069 [2024-11-27 04:32:36.855770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.069 
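The full-device write just below (`dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct`) pushes exactly one member's worth of data through the NBD device into the raid1 bdev. The reported rate checks out against the byte count and elapsed time from the dd summary line:

```python
# dd writes 65536 blocks of 512 B through /dev/nbd0 with O_DIRECT.
blocks = 65536
block_size = 512
total_bytes = blocks * block_size          # 33554432 bytes (34 MB, 32 MiB)
elapsed_s = 5.68745                        # from the dd summary line
mb_per_s = total_bytes / elapsed_s / 1e6   # dd reports decimal MB/s
print(round(mb_per_s, 1))  # 5.9, matching "33554432 bytes ... 5.9 MB/s"
```

Note dd's convention: `34 MB` is decimal (10^6) while `32 MiB` is binary (2^20), which is why both labels describe the same 33554432 bytes.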
04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.069 [2024-11-27 04:32:36.892739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.069 04:32:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.069 "name": "raid_bdev1", 00:15:41.069 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:41.069 "strip_size_kb": 0, 00:15:41.069 "state": "online", 00:15:41.069 "raid_level": "raid1", 00:15:41.069 "superblock": false, 00:15:41.069 "num_base_bdevs": 4, 00:15:41.069 "num_base_bdevs_discovered": 3, 00:15:41.069 "num_base_bdevs_operational": 3, 00:15:41.069 "base_bdevs_list": [ 00:15:41.069 { 00:15:41.069 "name": null, 00:15:41.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.069 "is_configured": false, 00:15:41.069 "data_offset": 0, 00:15:41.069 "data_size": 65536 00:15:41.069 }, 00:15:41.069 { 00:15:41.069 "name": "BaseBdev2", 00:15:41.069 "uuid": "fd36f9ea-b14b-55d1-bcfd-02408588814c", 00:15:41.069 "is_configured": true, 00:15:41.069 "data_offset": 0, 00:15:41.069 "data_size": 65536 00:15:41.069 }, 00:15:41.069 { 00:15:41.069 "name": "BaseBdev3", 00:15:41.069 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:41.069 "is_configured": true, 00:15:41.069 "data_offset": 0, 00:15:41.069 "data_size": 65536 00:15:41.069 }, 00:15:41.069 { 00:15:41.069 "name": "BaseBdev4", 00:15:41.069 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:41.069 
"is_configured": true, 00:15:41.069 "data_offset": 0, 00:15:41.069 "data_size": 65536 00:15:41.069 } 00:15:41.069 ] 00:15:41.069 }' 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.069 04:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.069 04:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:41.070 04:32:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.070 04:32:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.070 [2024-11-27 04:32:37.335937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.070 [2024-11-27 04:32:37.350600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:41.070 04:32:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.070 04:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:41.070 [2024-11-27 04:32:37.352451] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.108 "name": "raid_bdev1", 00:15:42.108 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:42.108 "strip_size_kb": 0, 00:15:42.108 "state": "online", 00:15:42.108 "raid_level": "raid1", 00:15:42.108 "superblock": false, 00:15:42.108 "num_base_bdevs": 4, 00:15:42.108 "num_base_bdevs_discovered": 4, 00:15:42.108 "num_base_bdevs_operational": 4, 00:15:42.108 "process": { 00:15:42.108 "type": "rebuild", 00:15:42.108 "target": "spare", 00:15:42.108 "progress": { 00:15:42.108 "blocks": 20480, 00:15:42.108 "percent": 31 00:15:42.108 } 00:15:42.108 }, 00:15:42.108 "base_bdevs_list": [ 00:15:42.108 { 00:15:42.108 "name": "spare", 00:15:42.108 "uuid": "e262d2df-c98d-54de-8d9e-6a48244b1c61", 00:15:42.108 "is_configured": true, 00:15:42.108 "data_offset": 0, 00:15:42.108 "data_size": 65536 00:15:42.108 }, 00:15:42.108 { 00:15:42.108 "name": "BaseBdev2", 00:15:42.108 "uuid": "fd36f9ea-b14b-55d1-bcfd-02408588814c", 00:15:42.108 "is_configured": true, 00:15:42.108 "data_offset": 0, 00:15:42.108 "data_size": 65536 00:15:42.108 }, 00:15:42.108 { 00:15:42.108 "name": "BaseBdev3", 00:15:42.108 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:42.108 "is_configured": true, 00:15:42.108 "data_offset": 0, 00:15:42.108 "data_size": 65536 00:15:42.108 }, 00:15:42.108 { 00:15:42.108 "name": "BaseBdev4", 00:15:42.108 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:42.108 "is_configured": true, 00:15:42.108 "data_offset": 0, 00:15:42.108 "data_size": 65536 00:15:42.108 } 00:15:42.108 ] 00:15:42.108 }' 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.108 [2024-11-27 04:32:38.511812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.108 [2024-11-27 04:32:38.558117] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.108 [2024-11-27 04:32:38.558207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.108 [2024-11-27 04:32:38.558228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.108 [2024-11-27 04:32:38.558239] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
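After a base bdev is removed, `verify_raid_bdev_state` re-reads `bdev_raid_get_bdevs` and checks the dump with jq: the removed slot keeps a `null` name and all-zero UUID with `is_configured: false`, while `num_base_bdevs_discovered` and `num_base_bdevs_operational` drop from 4 to 3. A sketch of the same checks in Python over a trimmed copy of the dumped JSON (the rebuild-progress assertion at the end uses the `blocks`/`percent` figures from the `process` block traced earlier):

```python
import json

# Trimmed copy of the raid_bdev_info dump from bdev_raid_get_bdevs
# after one base bdev has been removed.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

# The checks verify_raid_bdev_state performs via jq:
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid1"
configured = [b for b in raid_bdev_info["base_bdevs_list"] if b["is_configured"]]
assert len(configured) == raid_bdev_info["num_base_bdevs_discovered"] == 3

# Rebuild progress as reported in the "process" block: 20480 of 65536 blocks.
percent = 20480 * 100 // 65536
print(percent)  # 31, matching "percent": 31
```

The array stays `online` with 3 of 4 members because raid1 tolerates member loss; only the discovered/operational counts change until the spare finishes rebuilding.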
00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.108 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.109 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.109 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.109 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.109 "name": "raid_bdev1", 00:15:42.109 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:42.109 "strip_size_kb": 0, 00:15:42.109 "state": "online", 00:15:42.109 "raid_level": "raid1", 00:15:42.109 "superblock": false, 00:15:42.109 "num_base_bdevs": 4, 00:15:42.109 "num_base_bdevs_discovered": 3, 00:15:42.109 "num_base_bdevs_operational": 3, 00:15:42.109 "base_bdevs_list": [ 00:15:42.109 { 00:15:42.109 "name": null, 00:15:42.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.109 "is_configured": false, 00:15:42.109 "data_offset": 0, 00:15:42.109 "data_size": 65536 00:15:42.109 }, 00:15:42.109 { 00:15:42.109 "name": "BaseBdev2", 00:15:42.109 "uuid": "fd36f9ea-b14b-55d1-bcfd-02408588814c", 00:15:42.109 "is_configured": true, 00:15:42.109 "data_offset": 0, 00:15:42.109 "data_size": 65536 00:15:42.109 }, 00:15:42.109 { 
00:15:42.109 "name": "BaseBdev3", 00:15:42.109 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:42.109 "is_configured": true, 00:15:42.109 "data_offset": 0, 00:15:42.109 "data_size": 65536 00:15:42.109 }, 00:15:42.109 { 00:15:42.109 "name": "BaseBdev4", 00:15:42.109 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:42.109 "is_configured": true, 00:15:42.109 "data_offset": 0, 00:15:42.109 "data_size": 65536 00:15:42.109 } 00:15:42.109 ] 00:15:42.109 }' 00:15:42.109 04:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.109 04:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.679 "name": "raid_bdev1", 00:15:42.679 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:42.679 "strip_size_kb": 0, 00:15:42.679 "state": "online", 
00:15:42.679 "raid_level": "raid1", 00:15:42.679 "superblock": false, 00:15:42.679 "num_base_bdevs": 4, 00:15:42.679 "num_base_bdevs_discovered": 3, 00:15:42.679 "num_base_bdevs_operational": 3, 00:15:42.679 "base_bdevs_list": [ 00:15:42.679 { 00:15:42.679 "name": null, 00:15:42.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.679 "is_configured": false, 00:15:42.679 "data_offset": 0, 00:15:42.679 "data_size": 65536 00:15:42.679 }, 00:15:42.679 { 00:15:42.679 "name": "BaseBdev2", 00:15:42.679 "uuid": "fd36f9ea-b14b-55d1-bcfd-02408588814c", 00:15:42.679 "is_configured": true, 00:15:42.679 "data_offset": 0, 00:15:42.679 "data_size": 65536 00:15:42.679 }, 00:15:42.679 { 00:15:42.679 "name": "BaseBdev3", 00:15:42.679 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:42.679 "is_configured": true, 00:15:42.679 "data_offset": 0, 00:15:42.679 "data_size": 65536 00:15:42.679 }, 00:15:42.679 { 00:15:42.679 "name": "BaseBdev4", 00:15:42.679 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:42.679 "is_configured": true, 00:15:42.679 "data_offset": 0, 00:15:42.679 "data_size": 65536 00:15:42.679 } 00:15:42.679 ] 00:15:42.679 }' 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.679 [2024-11-27 04:32:39.178428] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.679 [2024-11-27 04:32:39.193937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.679 04:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:42.679 [2024-11-27 04:32:39.196120] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.618 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.618 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.618 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.618 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.618 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.878 "name": "raid_bdev1", 00:15:43.878 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:43.878 "strip_size_kb": 0, 00:15:43.878 "state": "online", 00:15:43.878 "raid_level": "raid1", 00:15:43.878 "superblock": false, 00:15:43.878 "num_base_bdevs": 4, 00:15:43.878 
"num_base_bdevs_discovered": 4, 00:15:43.878 "num_base_bdevs_operational": 4, 00:15:43.878 "process": { 00:15:43.878 "type": "rebuild", 00:15:43.878 "target": "spare", 00:15:43.878 "progress": { 00:15:43.878 "blocks": 20480, 00:15:43.878 "percent": 31 00:15:43.878 } 00:15:43.878 }, 00:15:43.878 "base_bdevs_list": [ 00:15:43.878 { 00:15:43.878 "name": "spare", 00:15:43.878 "uuid": "e262d2df-c98d-54de-8d9e-6a48244b1c61", 00:15:43.878 "is_configured": true, 00:15:43.878 "data_offset": 0, 00:15:43.878 "data_size": 65536 00:15:43.878 }, 00:15:43.878 { 00:15:43.878 "name": "BaseBdev2", 00:15:43.878 "uuid": "fd36f9ea-b14b-55d1-bcfd-02408588814c", 00:15:43.878 "is_configured": true, 00:15:43.878 "data_offset": 0, 00:15:43.878 "data_size": 65536 00:15:43.878 }, 00:15:43.878 { 00:15:43.878 "name": "BaseBdev3", 00:15:43.878 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:43.878 "is_configured": true, 00:15:43.878 "data_offset": 0, 00:15:43.878 "data_size": 65536 00:15:43.878 }, 00:15:43.878 { 00:15:43.878 "name": "BaseBdev4", 00:15:43.878 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:43.878 "is_configured": true, 00:15:43.878 "data_offset": 0, 00:15:43.878 "data_size": 65536 00:15:43.878 } 00:15:43.878 ] 00:15:43.878 }' 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.878 [2024-11-27 04:32:40.355561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.878 [2024-11-27 04:32:40.401859] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.878 04:32:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.878 "name": "raid_bdev1", 00:15:43.878 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:43.878 "strip_size_kb": 0, 00:15:43.878 "state": "online", 00:15:43.878 "raid_level": "raid1", 00:15:43.878 "superblock": false, 00:15:43.878 "num_base_bdevs": 4, 00:15:43.878 "num_base_bdevs_discovered": 3, 00:15:43.878 "num_base_bdevs_operational": 3, 00:15:43.878 "process": { 00:15:43.878 "type": "rebuild", 00:15:43.878 "target": "spare", 00:15:43.878 "progress": { 00:15:43.878 "blocks": 24576, 00:15:43.878 "percent": 37 00:15:43.878 } 00:15:43.878 }, 00:15:43.878 "base_bdevs_list": [ 00:15:43.878 { 00:15:43.878 "name": "spare", 00:15:43.878 "uuid": "e262d2df-c98d-54de-8d9e-6a48244b1c61", 00:15:43.878 "is_configured": true, 00:15:43.878 "data_offset": 0, 00:15:43.878 "data_size": 65536 00:15:43.878 }, 00:15:43.878 { 00:15:43.878 "name": null, 00:15:43.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.878 "is_configured": false, 00:15:43.878 "data_offset": 0, 00:15:43.878 "data_size": 65536 00:15:43.878 }, 00:15:43.878 { 00:15:43.878 "name": "BaseBdev3", 00:15:43.878 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:43.878 "is_configured": true, 00:15:43.878 "data_offset": 0, 00:15:43.878 "data_size": 65536 00:15:43.878 }, 00:15:43.878 { 00:15:43.878 "name": "BaseBdev4", 00:15:43.878 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:43.878 "is_configured": true, 00:15:43.878 "data_offset": 0, 00:15:43.878 "data_size": 65536 00:15:43.878 } 00:15:43.878 ] 00:15:43.878 }' 00:15:43.878 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.137 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.137 "name": "raid_bdev1", 00:15:44.137 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:44.137 "strip_size_kb": 0, 00:15:44.137 "state": "online", 00:15:44.137 "raid_level": "raid1", 00:15:44.137 "superblock": false, 00:15:44.137 "num_base_bdevs": 4, 00:15:44.137 "num_base_bdevs_discovered": 3, 00:15:44.137 "num_base_bdevs_operational": 3, 00:15:44.137 "process": { 00:15:44.137 "type": "rebuild", 00:15:44.137 "target": "spare", 00:15:44.137 "progress": { 
00:15:44.137 "blocks": 26624, 00:15:44.137 "percent": 40 00:15:44.137 } 00:15:44.137 }, 00:15:44.137 "base_bdevs_list": [ 00:15:44.137 { 00:15:44.137 "name": "spare", 00:15:44.137 "uuid": "e262d2df-c98d-54de-8d9e-6a48244b1c61", 00:15:44.137 "is_configured": true, 00:15:44.137 "data_offset": 0, 00:15:44.137 "data_size": 65536 00:15:44.137 }, 00:15:44.137 { 00:15:44.137 "name": null, 00:15:44.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.137 "is_configured": false, 00:15:44.137 "data_offset": 0, 00:15:44.137 "data_size": 65536 00:15:44.137 }, 00:15:44.137 { 00:15:44.138 "name": "BaseBdev3", 00:15:44.138 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:44.138 "is_configured": true, 00:15:44.138 "data_offset": 0, 00:15:44.138 "data_size": 65536 00:15:44.138 }, 00:15:44.138 { 00:15:44.138 "name": "BaseBdev4", 00:15:44.138 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:44.138 "is_configured": true, 00:15:44.138 "data_offset": 0, 00:15:44.138 "data_size": 65536 00:15:44.138 } 00:15:44.138 ] 00:15:44.138 }' 00:15:44.138 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.138 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.138 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.138 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.138 04:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.516 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.516 "name": "raid_bdev1", 00:15:45.516 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:45.516 "strip_size_kb": 0, 00:15:45.516 "state": "online", 00:15:45.516 "raid_level": "raid1", 00:15:45.516 "superblock": false, 00:15:45.516 "num_base_bdevs": 4, 00:15:45.516 "num_base_bdevs_discovered": 3, 00:15:45.516 "num_base_bdevs_operational": 3, 00:15:45.516 "process": { 00:15:45.516 "type": "rebuild", 00:15:45.516 "target": "spare", 00:15:45.516 "progress": { 00:15:45.516 "blocks": 51200, 00:15:45.516 "percent": 78 00:15:45.516 } 00:15:45.516 }, 00:15:45.516 "base_bdevs_list": [ 00:15:45.516 { 00:15:45.516 "name": "spare", 00:15:45.516 "uuid": "e262d2df-c98d-54de-8d9e-6a48244b1c61", 00:15:45.516 "is_configured": true, 00:15:45.516 "data_offset": 0, 00:15:45.516 "data_size": 65536 00:15:45.516 }, 00:15:45.516 { 00:15:45.516 "name": null, 00:15:45.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.516 "is_configured": false, 00:15:45.516 "data_offset": 0, 00:15:45.516 "data_size": 65536 00:15:45.516 }, 00:15:45.516 { 00:15:45.516 "name": "BaseBdev3", 00:15:45.516 "uuid": 
"5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:45.516 "is_configured": true, 00:15:45.516 "data_offset": 0, 00:15:45.516 "data_size": 65536 00:15:45.516 }, 00:15:45.516 { 00:15:45.516 "name": "BaseBdev4", 00:15:45.516 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:45.516 "is_configured": true, 00:15:45.516 "data_offset": 0, 00:15:45.516 "data_size": 65536 00:15:45.516 } 00:15:45.516 ] 00:15:45.517 }' 00:15:45.517 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.517 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.517 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.517 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.517 04:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.084 [2024-11-27 04:32:42.411414] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:46.084 [2024-11-27 04:32:42.411616] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:46.084 [2024-11-27 04:32:42.411702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.344 04:32:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.344 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.344 "name": "raid_bdev1", 00:15:46.344 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:46.344 "strip_size_kb": 0, 00:15:46.344 "state": "online", 00:15:46.344 "raid_level": "raid1", 00:15:46.344 "superblock": false, 00:15:46.344 "num_base_bdevs": 4, 00:15:46.344 "num_base_bdevs_discovered": 3, 00:15:46.344 "num_base_bdevs_operational": 3, 00:15:46.344 "base_bdevs_list": [ 00:15:46.344 { 00:15:46.344 "name": "spare", 00:15:46.344 "uuid": "e262d2df-c98d-54de-8d9e-6a48244b1c61", 00:15:46.344 "is_configured": true, 00:15:46.344 "data_offset": 0, 00:15:46.344 "data_size": 65536 00:15:46.344 }, 00:15:46.344 { 00:15:46.344 "name": null, 00:15:46.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.345 "is_configured": false, 00:15:46.345 "data_offset": 0, 00:15:46.345 "data_size": 65536 00:15:46.345 }, 00:15:46.345 { 00:15:46.345 "name": "BaseBdev3", 00:15:46.345 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:46.345 "is_configured": true, 00:15:46.345 "data_offset": 0, 00:15:46.345 "data_size": 65536 00:15:46.345 }, 00:15:46.345 { 00:15:46.345 "name": "BaseBdev4", 00:15:46.345 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:46.345 "is_configured": true, 00:15:46.345 "data_offset": 0, 00:15:46.345 "data_size": 65536 00:15:46.345 } 00:15:46.345 ] 00:15:46.345 }' 00:15:46.345 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.605 04:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.605 "name": "raid_bdev1", 00:15:46.605 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:46.605 "strip_size_kb": 0, 00:15:46.605 "state": "online", 00:15:46.605 "raid_level": "raid1", 00:15:46.605 "superblock": false, 00:15:46.605 "num_base_bdevs": 4, 00:15:46.605 "num_base_bdevs_discovered": 3, 00:15:46.605 "num_base_bdevs_operational": 3, 00:15:46.605 
"base_bdevs_list": [ 00:15:46.605 { 00:15:46.605 "name": "spare", 00:15:46.605 "uuid": "e262d2df-c98d-54de-8d9e-6a48244b1c61", 00:15:46.605 "is_configured": true, 00:15:46.605 "data_offset": 0, 00:15:46.605 "data_size": 65536 00:15:46.605 }, 00:15:46.605 { 00:15:46.605 "name": null, 00:15:46.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.605 "is_configured": false, 00:15:46.605 "data_offset": 0, 00:15:46.605 "data_size": 65536 00:15:46.605 }, 00:15:46.605 { 00:15:46.605 "name": "BaseBdev3", 00:15:46.605 "uuid": "5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:46.605 "is_configured": true, 00:15:46.605 "data_offset": 0, 00:15:46.605 "data_size": 65536 00:15:46.605 }, 00:15:46.605 { 00:15:46.605 "name": "BaseBdev4", 00:15:46.605 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:46.605 "is_configured": true, 00:15:46.605 "data_offset": 0, 00:15:46.605 "data_size": 65536 00:15:46.605 } 00:15:46.605 ] 00:15:46.605 }' 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.605 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.606 "name": "raid_bdev1", 00:15:46.606 "uuid": "27a5edba-a880-4623-aa68-c9e25273dae1", 00:15:46.606 "strip_size_kb": 0, 00:15:46.606 "state": "online", 00:15:46.606 "raid_level": "raid1", 00:15:46.606 "superblock": false, 00:15:46.606 "num_base_bdevs": 4, 00:15:46.606 "num_base_bdevs_discovered": 3, 00:15:46.606 "num_base_bdevs_operational": 3, 00:15:46.606 "base_bdevs_list": [ 00:15:46.606 { 00:15:46.606 "name": "spare", 00:15:46.606 "uuid": "e262d2df-c98d-54de-8d9e-6a48244b1c61", 00:15:46.606 "is_configured": true, 00:15:46.606 "data_offset": 0, 00:15:46.606 "data_size": 65536 00:15:46.606 }, 00:15:46.606 { 00:15:46.606 "name": null, 00:15:46.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.606 "is_configured": false, 00:15:46.606 "data_offset": 0, 00:15:46.606 "data_size": 65536 00:15:46.606 }, 00:15:46.606 { 00:15:46.606 "name": "BaseBdev3", 00:15:46.606 "uuid": 
"5cb2cdde-6de9-5dfc-8b89-fa875b8000a2", 00:15:46.606 "is_configured": true, 00:15:46.606 "data_offset": 0, 00:15:46.606 "data_size": 65536 00:15:46.606 }, 00:15:46.606 { 00:15:46.606 "name": "BaseBdev4", 00:15:46.606 "uuid": "f99be0f6-6394-5a00-a452-430abeb29c44", 00:15:46.606 "is_configured": true, 00:15:46.606 "data_offset": 0, 00:15:46.606 "data_size": 65536 00:15:46.606 } 00:15:46.606 ] 00:15:46.606 }' 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.606 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.175 [2024-11-27 04:32:43.568412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.175 [2024-11-27 04:32:43.568449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.175 [2024-11-27 04:32:43.568534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.175 [2024-11-27 04:32:43.568618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.175 [2024-11-27 04:32:43.568629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.175 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:47.435 /dev/nbd0 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:47.435 04:32:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.435 1+0 records in 00:15:47.435 1+0 records out 00:15:47.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567598 s, 7.2 MB/s 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.435 04:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:47.694 /dev/nbd1 00:15:47.694 
04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.694 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.695 1+0 records in 00:15:47.695 1+0 records out 00:15:47.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425974 s, 9.6 MB/s 00:15:47.695 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.695 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:47.695 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.695 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.695 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:47.695 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:15:47.695 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.695 04:32:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.954 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:48.213 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:48.213 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:48.213 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:48.213 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.213 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77874 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77874 ']' 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77874 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77874 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77874' 00:15:48.214 killing process with pid 77874 00:15:48.214 
04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77874 00:15:48.214 Received shutdown signal, test time was about 60.000000 seconds 00:15:48.214 00:15:48.214 Latency(us) 00:15:48.214 [2024-11-27T04:32:44.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.214 [2024-11-27T04:32:44.801Z] =================================================================================================================== 00:15:48.214 [2024-11-27T04:32:44.801Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:48.214 [2024-11-27 04:32:44.786764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.214 04:32:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77874 00:15:48.842 [2024-11-27 04:32:45.290317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:50.222 00:15:50.222 real 0m17.501s 00:15:50.222 user 0m19.555s 00:15:50.222 sys 0m3.101s 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.222 ************************************ 00:15:50.222 END TEST raid_rebuild_test 00:15:50.222 ************************************ 00:15:50.222 04:32:46 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:50.222 04:32:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:50.222 04:32:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.222 04:32:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.222 ************************************ 00:15:50.222 START TEST raid_rebuild_test_sb 00:15:50.222 ************************************ 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78317 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78317 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78317 ']' 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.222 04:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.222 [2024-11-27 04:32:46.607666] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:50.222 [2024-11-27 04:32:46.607884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78317 ] 00:15:50.222 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:50.222 Zero copy mechanism will not be used. 00:15:50.222 [2024-11-27 04:32:46.784977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.481 [2024-11-27 04:32:46.909344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.739 [2024-11-27 04:32:47.122621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.739 [2024-11-27 04:32:47.122772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.999 BaseBdev1_malloc 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.999 [2024-11-27 04:32:47.506650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:50.999 [2024-11-27 04:32:47.506714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.999 [2024-11-27 04:32:47.506739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:50.999 [2024-11-27 04:32:47.506750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.999 [2024-11-27 04:32:47.509056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.999 [2024-11-27 04:32:47.509110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.999 BaseBdev1 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.999 BaseBdev2_malloc 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.999 [2024-11-27 04:32:47.562160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:50.999 [2024-11-27 04:32:47.562226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.999 [2024-11-27 04:32:47.562252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:50.999 [2024-11-27 04:32:47.562263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.999 [2024-11-27 04:32:47.564491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.999 [2024-11-27 04:32:47.564607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:50.999 BaseBdev2 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.999 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.259 BaseBdev3_malloc 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.259 [2024-11-27 04:32:47.631596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:51.259 [2024-11-27 04:32:47.631666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.259 [2024-11-27 04:32:47.631696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:51.259 [2024-11-27 04:32:47.631709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.259 [2024-11-27 04:32:47.634037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.259 [2024-11-27 04:32:47.634078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:51.259 BaseBdev3 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.259 BaseBdev4_malloc 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:51.259 [2024-11-27 04:32:47.688746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:51.259 [2024-11-27 04:32:47.688809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.259 [2024-11-27 04:32:47.688832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:51.259 [2024-11-27 04:32:47.688843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.259 [2024-11-27 04:32:47.691073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.259 [2024-11-27 04:32:47.691122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:51.259 BaseBdev4 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.259 spare_malloc 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.259 spare_delay 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:51.259 04:32:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.259 [2024-11-27 04:32:47.757931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:51.259 [2024-11-27 04:32:47.757986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.259 [2024-11-27 04:32:47.758005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:51.259 [2024-11-27 04:32:47.758015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.259 [2024-11-27 04:32:47.760126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.259 [2024-11-27 04:32:47.760163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:51.259 spare 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.259 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.259 [2024-11-27 04:32:47.769953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.259 [2024-11-27 04:32:47.771739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.259 [2024-11-27 04:32:47.771890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.259 [2024-11-27 04:32:47.771957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:51.259 [2024-11-27 04:32:47.772176] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:51.259 [2024-11-27 04:32:47.772210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:51.259 [2024-11-27 04:32:47.772524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:51.259 [2024-11-27 04:32:47.772715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:51.259 [2024-11-27 04:32:47.772726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:51.260 [2024-11-27 04:32:47.772883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.260 "name": "raid_bdev1", 00:15:51.260 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:15:51.260 "strip_size_kb": 0, 00:15:51.260 "state": "online", 00:15:51.260 "raid_level": "raid1", 00:15:51.260 "superblock": true, 00:15:51.260 "num_base_bdevs": 4, 00:15:51.260 "num_base_bdevs_discovered": 4, 00:15:51.260 "num_base_bdevs_operational": 4, 00:15:51.260 "base_bdevs_list": [ 00:15:51.260 { 00:15:51.260 "name": "BaseBdev1", 00:15:51.260 "uuid": "cba7f587-9b61-534c-946c-58ca43b3bb32", 00:15:51.260 "is_configured": true, 00:15:51.260 "data_offset": 2048, 00:15:51.260 "data_size": 63488 00:15:51.260 }, 00:15:51.260 { 00:15:51.260 "name": "BaseBdev2", 00:15:51.260 "uuid": "08d7b078-761a-5a71-837a-392ac13a02a6", 00:15:51.260 "is_configured": true, 00:15:51.260 "data_offset": 2048, 00:15:51.260 "data_size": 63488 00:15:51.260 }, 00:15:51.260 { 00:15:51.260 "name": "BaseBdev3", 00:15:51.260 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:15:51.260 "is_configured": true, 00:15:51.260 "data_offset": 2048, 00:15:51.260 "data_size": 63488 00:15:51.260 }, 00:15:51.260 { 00:15:51.260 "name": "BaseBdev4", 00:15:51.260 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:15:51.260 "is_configured": true, 00:15:51.260 "data_offset": 2048, 00:15:51.260 "data_size": 63488 00:15:51.260 } 00:15:51.260 ] 00:15:51.260 }' 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.260 04:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:51.830 [2024-11-27 04:32:48.201650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.830 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:52.092 [2024-11-27 04:32:48.496772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:52.092 /dev/nbd0 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:52.092 
04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.092 1+0 records in 00:15:52.092 1+0 records out 00:15:52.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392811 s, 10.4 MB/s 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:52.092 04:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:58.725 63488+0 records in 00:15:58.725 63488+0 records out 00:15:58.725 32505856 bytes (33 MB, 31 MiB) copied, 5.59349 s, 5.8 MB/s 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:58.725 [2024-11-27 04:32:54.384109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.725 [2024-11-27 04:32:54.400193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.725 
04:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.725 "name": "raid_bdev1", 00:15:58.725 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:15:58.725 "strip_size_kb": 0, 00:15:58.725 "state": 
"online", 00:15:58.725 "raid_level": "raid1", 00:15:58.725 "superblock": true, 00:15:58.725 "num_base_bdevs": 4, 00:15:58.725 "num_base_bdevs_discovered": 3, 00:15:58.725 "num_base_bdevs_operational": 3, 00:15:58.725 "base_bdevs_list": [ 00:15:58.725 { 00:15:58.725 "name": null, 00:15:58.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.725 "is_configured": false, 00:15:58.725 "data_offset": 0, 00:15:58.725 "data_size": 63488 00:15:58.725 }, 00:15:58.725 { 00:15:58.725 "name": "BaseBdev2", 00:15:58.725 "uuid": "08d7b078-761a-5a71-837a-392ac13a02a6", 00:15:58.725 "is_configured": true, 00:15:58.725 "data_offset": 2048, 00:15:58.725 "data_size": 63488 00:15:58.725 }, 00:15:58.725 { 00:15:58.725 "name": "BaseBdev3", 00:15:58.725 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:15:58.725 "is_configured": true, 00:15:58.725 "data_offset": 2048, 00:15:58.725 "data_size": 63488 00:15:58.725 }, 00:15:58.725 { 00:15:58.725 "name": "BaseBdev4", 00:15:58.725 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:15:58.725 "is_configured": true, 00:15:58.725 "data_offset": 2048, 00:15:58.725 "data_size": 63488 00:15:58.725 } 00:15:58.725 ] 00:15:58.725 }' 00:15:58.725 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.726 04:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:58.726 04:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 04:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 [2024-11-27 04:32:54.891358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.726 [2024-11-27 04:32:54.907928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:58.726 04:32:54 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 [2024-11-27 04:32:54.909940] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.726 04:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.663 "name": "raid_bdev1", 00:15:59.663 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:15:59.663 "strip_size_kb": 0, 00:15:59.663 "state": "online", 00:15:59.663 "raid_level": "raid1", 00:15:59.663 "superblock": true, 00:15:59.663 "num_base_bdevs": 4, 00:15:59.663 "num_base_bdevs_discovered": 4, 00:15:59.663 "num_base_bdevs_operational": 4, 00:15:59.663 "process": { 00:15:59.663 "type": "rebuild", 00:15:59.663 "target": "spare", 00:15:59.663 "progress": { 00:15:59.663 "blocks": 20480, 
00:15:59.663 "percent": 32 00:15:59.663 } 00:15:59.663 }, 00:15:59.663 "base_bdevs_list": [ 00:15:59.663 { 00:15:59.663 "name": "spare", 00:15:59.663 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:15:59.663 "is_configured": true, 00:15:59.663 "data_offset": 2048, 00:15:59.663 "data_size": 63488 00:15:59.663 }, 00:15:59.663 { 00:15:59.663 "name": "BaseBdev2", 00:15:59.663 "uuid": "08d7b078-761a-5a71-837a-392ac13a02a6", 00:15:59.663 "is_configured": true, 00:15:59.663 "data_offset": 2048, 00:15:59.663 "data_size": 63488 00:15:59.663 }, 00:15:59.663 { 00:15:59.663 "name": "BaseBdev3", 00:15:59.663 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:15:59.663 "is_configured": true, 00:15:59.663 "data_offset": 2048, 00:15:59.663 "data_size": 63488 00:15:59.663 }, 00:15:59.663 { 00:15:59.663 "name": "BaseBdev4", 00:15:59.663 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:15:59.663 "is_configured": true, 00:15:59.663 "data_offset": 2048, 00:15:59.663 "data_size": 63488 00:15:59.663 } 00:15:59.663 ] 00:15:59.663 }' 00:15:59.663 04:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.664 [2024-11-27 04:32:56.073468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.664 [2024-11-27 04:32:56.115763] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:59.664 [2024-11-27 04:32:56.115866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.664 [2024-11-27 04:32:56.115884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.664 [2024-11-27 04:32:56.115894] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.664 "name": "raid_bdev1", 00:15:59.664 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:15:59.664 "strip_size_kb": 0, 00:15:59.664 "state": "online", 00:15:59.664 "raid_level": "raid1", 00:15:59.664 "superblock": true, 00:15:59.664 "num_base_bdevs": 4, 00:15:59.664 "num_base_bdevs_discovered": 3, 00:15:59.664 "num_base_bdevs_operational": 3, 00:15:59.664 "base_bdevs_list": [ 00:15:59.664 { 00:15:59.664 "name": null, 00:15:59.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.664 "is_configured": false, 00:15:59.664 "data_offset": 0, 00:15:59.664 "data_size": 63488 00:15:59.664 }, 00:15:59.664 { 00:15:59.664 "name": "BaseBdev2", 00:15:59.664 "uuid": "08d7b078-761a-5a71-837a-392ac13a02a6", 00:15:59.664 "is_configured": true, 00:15:59.664 "data_offset": 2048, 00:15:59.664 "data_size": 63488 00:15:59.664 }, 00:15:59.664 { 00:15:59.664 "name": "BaseBdev3", 00:15:59.664 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:15:59.664 "is_configured": true, 00:15:59.664 "data_offset": 2048, 00:15:59.664 "data_size": 63488 00:15:59.664 }, 00:15:59.664 { 00:15:59.664 "name": "BaseBdev4", 00:15:59.664 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:15:59.664 "is_configured": true, 00:15:59.664 "data_offset": 2048, 00:15:59.664 "data_size": 63488 00:15:59.664 } 00:15:59.664 ] 00:15:59.664 }' 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.664 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.240 "name": "raid_bdev1", 00:16:00.240 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:00.240 "strip_size_kb": 0, 00:16:00.240 "state": "online", 00:16:00.240 "raid_level": "raid1", 00:16:00.240 "superblock": true, 00:16:00.240 "num_base_bdevs": 4, 00:16:00.240 "num_base_bdevs_discovered": 3, 00:16:00.240 "num_base_bdevs_operational": 3, 00:16:00.240 "base_bdevs_list": [ 00:16:00.240 { 00:16:00.240 "name": null, 00:16:00.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.240 "is_configured": false, 00:16:00.240 "data_offset": 0, 00:16:00.240 "data_size": 63488 00:16:00.240 }, 00:16:00.240 { 00:16:00.240 "name": "BaseBdev2", 00:16:00.240 "uuid": "08d7b078-761a-5a71-837a-392ac13a02a6", 00:16:00.240 "is_configured": true, 00:16:00.240 "data_offset": 2048, 00:16:00.240 "data_size": 63488 00:16:00.240 }, 00:16:00.240 { 00:16:00.240 "name": "BaseBdev3", 00:16:00.240 "uuid": 
"5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:00.240 "is_configured": true, 00:16:00.240 "data_offset": 2048, 00:16:00.240 "data_size": 63488 00:16:00.240 }, 00:16:00.240 { 00:16:00.240 "name": "BaseBdev4", 00:16:00.240 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:00.240 "is_configured": true, 00:16:00.240 "data_offset": 2048, 00:16:00.240 "data_size": 63488 00:16:00.240 } 00:16:00.240 ] 00:16:00.240 }' 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.240 [2024-11-27 04:32:56.766742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.240 [2024-11-27 04:32:56.784203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.240 04:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:00.240 [2024-11-27 04:32:56.786449] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.619 "name": "raid_bdev1", 00:16:01.619 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:01.619 "strip_size_kb": 0, 00:16:01.619 "state": "online", 00:16:01.619 "raid_level": "raid1", 00:16:01.619 "superblock": true, 00:16:01.619 "num_base_bdevs": 4, 00:16:01.619 "num_base_bdevs_discovered": 4, 00:16:01.619 "num_base_bdevs_operational": 4, 00:16:01.619 "process": { 00:16:01.619 "type": "rebuild", 00:16:01.619 "target": "spare", 00:16:01.619 "progress": { 00:16:01.619 "blocks": 20480, 00:16:01.619 "percent": 32 00:16:01.619 } 00:16:01.619 }, 00:16:01.619 "base_bdevs_list": [ 00:16:01.619 { 00:16:01.619 "name": "spare", 00:16:01.619 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:01.619 "is_configured": true, 00:16:01.619 "data_offset": 2048, 00:16:01.619 "data_size": 63488 00:16:01.619 }, 00:16:01.619 { 00:16:01.619 "name": "BaseBdev2", 00:16:01.619 "uuid": "08d7b078-761a-5a71-837a-392ac13a02a6", 00:16:01.619 "is_configured": true, 00:16:01.619 "data_offset": 2048, 
00:16:01.619 "data_size": 63488 00:16:01.619 }, 00:16:01.619 { 00:16:01.619 "name": "BaseBdev3", 00:16:01.619 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:01.619 "is_configured": true, 00:16:01.619 "data_offset": 2048, 00:16:01.619 "data_size": 63488 00:16:01.619 }, 00:16:01.619 { 00:16:01.619 "name": "BaseBdev4", 00:16:01.619 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:01.619 "is_configured": true, 00:16:01.619 "data_offset": 2048, 00:16:01.619 "data_size": 63488 00:16:01.619 } 00:16:01.619 ] 00:16:01.619 }' 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:01.619 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.619 04:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.619 [2024-11-27 04:32:57.933384] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:01.619 [2024-11-27 04:32:58.092204] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.619 "name": "raid_bdev1", 00:16:01.619 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:01.619 "strip_size_kb": 0, 00:16:01.619 "state": "online", 00:16:01.619 "raid_level": "raid1", 00:16:01.619 "superblock": true, 00:16:01.619 "num_base_bdevs": 4, 
00:16:01.619 "num_base_bdevs_discovered": 3, 00:16:01.619 "num_base_bdevs_operational": 3, 00:16:01.619 "process": { 00:16:01.619 "type": "rebuild", 00:16:01.619 "target": "spare", 00:16:01.619 "progress": { 00:16:01.619 "blocks": 24576, 00:16:01.619 "percent": 38 00:16:01.619 } 00:16:01.619 }, 00:16:01.619 "base_bdevs_list": [ 00:16:01.619 { 00:16:01.619 "name": "spare", 00:16:01.619 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:01.619 "is_configured": true, 00:16:01.619 "data_offset": 2048, 00:16:01.619 "data_size": 63488 00:16:01.619 }, 00:16:01.619 { 00:16:01.619 "name": null, 00:16:01.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.619 "is_configured": false, 00:16:01.619 "data_offset": 0, 00:16:01.619 "data_size": 63488 00:16:01.619 }, 00:16:01.619 { 00:16:01.619 "name": "BaseBdev3", 00:16:01.619 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:01.619 "is_configured": true, 00:16:01.619 "data_offset": 2048, 00:16:01.619 "data_size": 63488 00:16:01.619 }, 00:16:01.619 { 00:16:01.619 "name": "BaseBdev4", 00:16:01.619 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:01.619 "is_configured": true, 00:16:01.619 "data_offset": 2048, 00:16:01.619 "data_size": 63488 00:16:01.619 } 00:16:01.619 ] 00:16:01.619 }' 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.619 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.879 "name": "raid_bdev1", 00:16:01.879 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:01.879 "strip_size_kb": 0, 00:16:01.879 "state": "online", 00:16:01.879 "raid_level": "raid1", 00:16:01.879 "superblock": true, 00:16:01.879 "num_base_bdevs": 4, 00:16:01.879 "num_base_bdevs_discovered": 3, 00:16:01.879 "num_base_bdevs_operational": 3, 00:16:01.879 "process": { 00:16:01.879 "type": "rebuild", 00:16:01.879 "target": "spare", 00:16:01.879 "progress": { 00:16:01.879 "blocks": 26624, 00:16:01.879 "percent": 41 00:16:01.879 } 00:16:01.879 }, 00:16:01.879 "base_bdevs_list": [ 00:16:01.879 { 00:16:01.879 "name": "spare", 00:16:01.879 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:01.879 "is_configured": true, 00:16:01.879 "data_offset": 2048, 00:16:01.879 "data_size": 63488 00:16:01.879 }, 00:16:01.879 { 
00:16:01.879 "name": null, 00:16:01.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.879 "is_configured": false, 00:16:01.879 "data_offset": 0, 00:16:01.879 "data_size": 63488 00:16:01.879 }, 00:16:01.879 { 00:16:01.879 "name": "BaseBdev3", 00:16:01.879 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:01.879 "is_configured": true, 00:16:01.879 "data_offset": 2048, 00:16:01.879 "data_size": 63488 00:16:01.879 }, 00:16:01.879 { 00:16:01.879 "name": "BaseBdev4", 00:16:01.879 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:01.879 "is_configured": true, 00:16:01.879 "data_offset": 2048, 00:16:01.879 "data_size": 63488 00:16:01.879 } 00:16:01.879 ] 00:16:01.879 }' 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.879 04:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.819 04:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.077 04:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.077 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.077 "name": "raid_bdev1", 00:16:03.077 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:03.077 "strip_size_kb": 0, 00:16:03.077 "state": "online", 00:16:03.077 "raid_level": "raid1", 00:16:03.077 "superblock": true, 00:16:03.077 "num_base_bdevs": 4, 00:16:03.077 "num_base_bdevs_discovered": 3, 00:16:03.077 "num_base_bdevs_operational": 3, 00:16:03.077 "process": { 00:16:03.077 "type": "rebuild", 00:16:03.077 "target": "spare", 00:16:03.077 "progress": { 00:16:03.077 "blocks": 51200, 00:16:03.077 "percent": 80 00:16:03.077 } 00:16:03.077 }, 00:16:03.077 "base_bdevs_list": [ 00:16:03.077 { 00:16:03.077 "name": "spare", 00:16:03.077 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:03.077 "is_configured": true, 00:16:03.077 "data_offset": 2048, 00:16:03.077 "data_size": 63488 00:16:03.077 }, 00:16:03.077 { 00:16:03.077 "name": null, 00:16:03.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.077 "is_configured": false, 00:16:03.077 "data_offset": 0, 00:16:03.077 "data_size": 63488 00:16:03.077 }, 00:16:03.077 { 00:16:03.077 "name": "BaseBdev3", 00:16:03.077 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:03.077 "is_configured": true, 00:16:03.077 "data_offset": 2048, 00:16:03.077 "data_size": 63488 00:16:03.077 }, 00:16:03.077 { 00:16:03.077 "name": "BaseBdev4", 00:16:03.077 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:03.077 "is_configured": true, 00:16:03.077 "data_offset": 
2048, 00:16:03.077 "data_size": 63488 00:16:03.077 } 00:16:03.077 ] 00:16:03.077 }' 00:16:03.077 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.077 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.077 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.077 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.077 04:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.660 [2024-11-27 04:33:00.001129] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:03.660 [2024-11-27 04:33:00.001217] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:03.660 [2024-11-27 04:33:00.001375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.237 "name": "raid_bdev1", 00:16:04.237 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:04.237 "strip_size_kb": 0, 00:16:04.237 "state": "online", 00:16:04.237 "raid_level": "raid1", 00:16:04.237 "superblock": true, 00:16:04.237 "num_base_bdevs": 4, 00:16:04.237 "num_base_bdevs_discovered": 3, 00:16:04.237 "num_base_bdevs_operational": 3, 00:16:04.237 "base_bdevs_list": [ 00:16:04.237 { 00:16:04.237 "name": "spare", 00:16:04.237 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:04.237 "is_configured": true, 00:16:04.237 "data_offset": 2048, 00:16:04.237 "data_size": 63488 00:16:04.237 }, 00:16:04.237 { 00:16:04.237 "name": null, 00:16:04.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.237 "is_configured": false, 00:16:04.237 "data_offset": 0, 00:16:04.237 "data_size": 63488 00:16:04.237 }, 00:16:04.237 { 00:16:04.237 "name": "BaseBdev3", 00:16:04.237 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:04.237 "is_configured": true, 00:16:04.237 "data_offset": 2048, 00:16:04.237 "data_size": 63488 00:16:04.237 }, 00:16:04.237 { 00:16:04.237 "name": "BaseBdev4", 00:16:04.237 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:04.237 "is_configured": true, 00:16:04.237 "data_offset": 2048, 00:16:04.237 "data_size": 63488 00:16:04.237 } 00:16:04.237 ] 00:16:04.237 }' 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.237 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.238 04:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.238 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.238 "name": "raid_bdev1", 00:16:04.238 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:04.238 "strip_size_kb": 0, 00:16:04.238 "state": "online", 00:16:04.238 "raid_level": "raid1", 00:16:04.238 "superblock": true, 00:16:04.238 "num_base_bdevs": 4, 00:16:04.238 "num_base_bdevs_discovered": 3, 00:16:04.238 "num_base_bdevs_operational": 3, 00:16:04.238 "base_bdevs_list": [ 00:16:04.238 { 00:16:04.238 "name": "spare", 00:16:04.238 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:04.238 "is_configured": true, 00:16:04.238 "data_offset": 2048, 
00:16:04.238 "data_size": 63488 00:16:04.238 }, 00:16:04.238 { 00:16:04.238 "name": null, 00:16:04.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.238 "is_configured": false, 00:16:04.238 "data_offset": 0, 00:16:04.238 "data_size": 63488 00:16:04.238 }, 00:16:04.238 { 00:16:04.238 "name": "BaseBdev3", 00:16:04.238 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:04.238 "is_configured": true, 00:16:04.238 "data_offset": 2048, 00:16:04.238 "data_size": 63488 00:16:04.238 }, 00:16:04.238 { 00:16:04.238 "name": "BaseBdev4", 00:16:04.238 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:04.238 "is_configured": true, 00:16:04.238 "data_offset": 2048, 00:16:04.238 "data_size": 63488 00:16:04.238 } 00:16:04.238 ] 00:16:04.238 }' 00:16:04.238 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.238 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.238 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.497 
04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.497 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.498 04:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.498 04:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.498 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.498 04:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.498 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.498 "name": "raid_bdev1", 00:16:04.498 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:04.498 "strip_size_kb": 0, 00:16:04.498 "state": "online", 00:16:04.498 "raid_level": "raid1", 00:16:04.498 "superblock": true, 00:16:04.498 "num_base_bdevs": 4, 00:16:04.498 "num_base_bdevs_discovered": 3, 00:16:04.498 "num_base_bdevs_operational": 3, 00:16:04.498 "base_bdevs_list": [ 00:16:04.498 { 00:16:04.498 "name": "spare", 00:16:04.498 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:04.498 "is_configured": true, 00:16:04.498 "data_offset": 2048, 00:16:04.498 "data_size": 63488 00:16:04.498 }, 00:16:04.498 { 00:16:04.498 "name": null, 00:16:04.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.498 "is_configured": false, 00:16:04.498 "data_offset": 0, 00:16:04.498 "data_size": 63488 00:16:04.498 }, 00:16:04.498 { 00:16:04.498 "name": "BaseBdev3", 00:16:04.498 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:04.498 "is_configured": true, 00:16:04.498 "data_offset": 2048, 00:16:04.498 "data_size": 63488 
00:16:04.498 }, 00:16:04.498 { 00:16:04.498 "name": "BaseBdev4", 00:16:04.498 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:04.498 "is_configured": true, 00:16:04.498 "data_offset": 2048, 00:16:04.498 "data_size": 63488 00:16:04.498 } 00:16:04.498 ] 00:16:04.498 }' 00:16:04.498 04:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.498 04:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.758 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:04.758 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.758 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.758 [2024-11-27 04:33:01.294495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.758 [2024-11-27 04:33:01.294532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.758 [2024-11-27 04:33:01.294628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.758 [2024-11-27 04:33:01.294728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.758 [2024-11-27 04:33:01.294743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:04.758 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.758 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.758 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:04.758 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.758 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.758 
04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:05.019 /dev/nbd0 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.019 1+0 records in 00:16:05.019 1+0 records out 00:16:05.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240381 s, 17.0 MB/s 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.019 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:05.279 /dev/nbd1 00:16:05.279 04:33:01 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.279 1+0 records in 00:16:05.279 1+0 records out 00:16:05.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443495 s, 9.2 MB/s 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:05.279 04:33:01 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.279 04:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:05.538 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:05.538 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.538 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:05.538 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.538 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:05.538 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.538 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.798 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.058 [2024-11-27 04:33:02.529531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:16:06.058 [2024-11-27 04:33:02.529615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.058 [2024-11-27 04:33:02.529643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:06.058 [2024-11-27 04:33:02.529653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.058 [2024-11-27 04:33:02.532036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.058 [2024-11-27 04:33:02.532080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.058 [2024-11-27 04:33:02.532198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.058 [2024-11-27 04:33:02.532254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.058 [2024-11-27 04:33:02.532443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.058 [2024-11-27 04:33:02.532547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.058 spare 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.058 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.058 [2024-11-27 04:33:02.632466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:06.058 [2024-11-27 04:33:02.632506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:06.058 [2024-11-27 04:33:02.632908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:06.059 [2024-11-27 04:33:02.633164] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:06.059 [2024-11-27 04:33:02.633189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:06.059 [2024-11-27 04:33:02.633437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.059 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.318 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.318 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.318 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.318 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:06.318 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.318 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.318 "name": "raid_bdev1", 00:16:06.318 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:06.318 "strip_size_kb": 0, 00:16:06.318 "state": "online", 00:16:06.318 "raid_level": "raid1", 00:16:06.318 "superblock": true, 00:16:06.318 "num_base_bdevs": 4, 00:16:06.318 "num_base_bdevs_discovered": 3, 00:16:06.318 "num_base_bdevs_operational": 3, 00:16:06.318 "base_bdevs_list": [ 00:16:06.318 { 00:16:06.318 "name": "spare", 00:16:06.318 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:06.318 "is_configured": true, 00:16:06.318 "data_offset": 2048, 00:16:06.318 "data_size": 63488 00:16:06.318 }, 00:16:06.318 { 00:16:06.318 "name": null, 00:16:06.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.318 "is_configured": false, 00:16:06.318 "data_offset": 2048, 00:16:06.318 "data_size": 63488 00:16:06.318 }, 00:16:06.318 { 00:16:06.318 "name": "BaseBdev3", 00:16:06.318 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:06.318 "is_configured": true, 00:16:06.318 "data_offset": 2048, 00:16:06.318 "data_size": 63488 00:16:06.318 }, 00:16:06.318 { 00:16:06.318 "name": "BaseBdev4", 00:16:06.318 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:06.318 "is_configured": true, 00:16:06.318 "data_offset": 2048, 00:16:06.318 "data_size": 63488 00:16:06.318 } 00:16:06.318 ] 00:16:06.318 }' 00:16:06.318 04:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.318 04:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.577 04:33:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.577 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.577 "name": "raid_bdev1", 00:16:06.577 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:06.577 "strip_size_kb": 0, 00:16:06.578 "state": "online", 00:16:06.578 "raid_level": "raid1", 00:16:06.578 "superblock": true, 00:16:06.578 "num_base_bdevs": 4, 00:16:06.578 "num_base_bdevs_discovered": 3, 00:16:06.578 "num_base_bdevs_operational": 3, 00:16:06.578 "base_bdevs_list": [ 00:16:06.578 { 00:16:06.578 "name": "spare", 00:16:06.578 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:06.578 "is_configured": true, 00:16:06.578 "data_offset": 2048, 00:16:06.578 "data_size": 63488 00:16:06.578 }, 00:16:06.578 { 00:16:06.578 "name": null, 00:16:06.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.578 "is_configured": false, 00:16:06.578 "data_offset": 2048, 00:16:06.578 "data_size": 63488 00:16:06.578 }, 00:16:06.578 { 00:16:06.578 "name": "BaseBdev3", 00:16:06.578 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:06.578 "is_configured": true, 00:16:06.578 "data_offset": 2048, 00:16:06.578 "data_size": 63488 00:16:06.578 
}, 00:16:06.578 { 00:16:06.578 "name": "BaseBdev4", 00:16:06.578 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:06.578 "is_configured": true, 00:16:06.578 "data_offset": 2048, 00:16:06.578 "data_size": 63488 00:16:06.578 } 00:16:06.578 ] 00:16:06.578 }' 00:16:06.578 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.838 [2024-11-27 04:33:03.316356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.838 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.838 "name": "raid_bdev1", 00:16:06.838 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:06.838 "strip_size_kb": 0, 00:16:06.838 "state": "online", 00:16:06.838 "raid_level": "raid1", 00:16:06.838 "superblock": true, 00:16:06.838 "num_base_bdevs": 4, 00:16:06.838 "num_base_bdevs_discovered": 2, 00:16:06.839 "num_base_bdevs_operational": 
2, 00:16:06.839 "base_bdevs_list": [ 00:16:06.839 { 00:16:06.839 "name": null, 00:16:06.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.839 "is_configured": false, 00:16:06.839 "data_offset": 0, 00:16:06.839 "data_size": 63488 00:16:06.839 }, 00:16:06.839 { 00:16:06.839 "name": null, 00:16:06.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.839 "is_configured": false, 00:16:06.839 "data_offset": 2048, 00:16:06.839 "data_size": 63488 00:16:06.839 }, 00:16:06.839 { 00:16:06.839 "name": "BaseBdev3", 00:16:06.839 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:06.839 "is_configured": true, 00:16:06.839 "data_offset": 2048, 00:16:06.839 "data_size": 63488 00:16:06.839 }, 00:16:06.839 { 00:16:06.839 "name": "BaseBdev4", 00:16:06.839 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:06.839 "is_configured": true, 00:16:06.839 "data_offset": 2048, 00:16:06.839 "data_size": 63488 00:16:06.839 } 00:16:06.839 ] 00:16:06.839 }' 00:16:06.839 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.839 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.409 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.409 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.409 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.409 [2024-11-27 04:33:03.743676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.409 [2024-11-27 04:33:03.743915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:07.409 [2024-11-27 04:33:03.743939] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:07.409 [2024-11-27 04:33:03.743983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.409 [2024-11-27 04:33:03.759390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:07.409 04:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.409 04:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:07.409 [2024-11-27 04:33:03.761336] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.347 "name": "raid_bdev1", 00:16:08.347 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:08.347 "strip_size_kb": 0, 00:16:08.347 "state": "online", 00:16:08.347 "raid_level": "raid1", 
00:16:08.347 "superblock": true, 00:16:08.347 "num_base_bdevs": 4, 00:16:08.347 "num_base_bdevs_discovered": 3, 00:16:08.347 "num_base_bdevs_operational": 3, 00:16:08.347 "process": { 00:16:08.347 "type": "rebuild", 00:16:08.347 "target": "spare", 00:16:08.347 "progress": { 00:16:08.347 "blocks": 20480, 00:16:08.347 "percent": 32 00:16:08.347 } 00:16:08.347 }, 00:16:08.347 "base_bdevs_list": [ 00:16:08.347 { 00:16:08.347 "name": "spare", 00:16:08.347 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:08.347 "is_configured": true, 00:16:08.347 "data_offset": 2048, 00:16:08.347 "data_size": 63488 00:16:08.347 }, 00:16:08.347 { 00:16:08.347 "name": null, 00:16:08.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.347 "is_configured": false, 00:16:08.347 "data_offset": 2048, 00:16:08.347 "data_size": 63488 00:16:08.347 }, 00:16:08.347 { 00:16:08.347 "name": "BaseBdev3", 00:16:08.347 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:08.347 "is_configured": true, 00:16:08.347 "data_offset": 2048, 00:16:08.347 "data_size": 63488 00:16:08.347 }, 00:16:08.347 { 00:16:08.347 "name": "BaseBdev4", 00:16:08.347 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:08.347 "is_configured": true, 00:16:08.347 "data_offset": 2048, 00:16:08.347 "data_size": 63488 00:16:08.347 } 00:16:08.347 ] 00:16:08.347 }' 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:08.347 04:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.347 [2024-11-27 04:33:04.912771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.606 [2024-11-27 04:33:04.966980] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:08.606 [2024-11-27 04:33:04.967057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.606 [2024-11-27 04:33:04.967075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.606 [2024-11-27 04:33:04.967092] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.606 04:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.606 04:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.606 04:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.606 "name": "raid_bdev1", 00:16:08.606 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:08.606 "strip_size_kb": 0, 00:16:08.606 "state": "online", 00:16:08.606 "raid_level": "raid1", 00:16:08.606 "superblock": true, 00:16:08.606 "num_base_bdevs": 4, 00:16:08.606 "num_base_bdevs_discovered": 2, 00:16:08.606 "num_base_bdevs_operational": 2, 00:16:08.606 "base_bdevs_list": [ 00:16:08.606 { 00:16:08.606 "name": null, 00:16:08.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.606 "is_configured": false, 00:16:08.606 "data_offset": 0, 00:16:08.606 "data_size": 63488 00:16:08.606 }, 00:16:08.606 { 00:16:08.606 "name": null, 00:16:08.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.606 "is_configured": false, 00:16:08.606 "data_offset": 2048, 00:16:08.606 "data_size": 63488 00:16:08.606 }, 00:16:08.606 { 00:16:08.606 "name": "BaseBdev3", 00:16:08.606 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:08.606 "is_configured": true, 00:16:08.606 "data_offset": 2048, 00:16:08.606 "data_size": 63488 00:16:08.606 }, 00:16:08.606 { 00:16:08.606 "name": "BaseBdev4", 00:16:08.606 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:08.606 "is_configured": true, 00:16:08.606 "data_offset": 2048, 00:16:08.606 "data_size": 63488 00:16:08.606 } 00:16:08.606 ] 00:16:08.606 }' 00:16:08.606 04:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:08.606 04:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.174 04:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.174 04:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.174 04:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.174 [2024-11-27 04:33:05.465173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.174 [2024-11-27 04:33:05.465247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.174 [2024-11-27 04:33:05.465287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:09.174 [2024-11-27 04:33:05.465298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.174 [2024-11-27 04:33:05.465838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.174 [2024-11-27 04:33:05.465868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.174 [2024-11-27 04:33:05.465980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:09.174 [2024-11-27 04:33:05.466001] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:09.174 [2024-11-27 04:33:05.466021] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:09.174 [2024-11-27 04:33:05.466043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.174 [2024-11-27 04:33:05.482999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:09.174 spare 00:16:09.174 04:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.174 04:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:09.174 [2024-11-27 04:33:05.485254] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.111 "name": "raid_bdev1", 00:16:10.111 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:10.111 "strip_size_kb": 0, 00:16:10.111 "state": "online", 00:16:10.111 
"raid_level": "raid1", 00:16:10.111 "superblock": true, 00:16:10.111 "num_base_bdevs": 4, 00:16:10.111 "num_base_bdevs_discovered": 3, 00:16:10.111 "num_base_bdevs_operational": 3, 00:16:10.111 "process": { 00:16:10.111 "type": "rebuild", 00:16:10.111 "target": "spare", 00:16:10.111 "progress": { 00:16:10.111 "blocks": 20480, 00:16:10.111 "percent": 32 00:16:10.111 } 00:16:10.111 }, 00:16:10.111 "base_bdevs_list": [ 00:16:10.111 { 00:16:10.111 "name": "spare", 00:16:10.111 "uuid": "8ce9c465-1fae-558a-b9df-035a6f460050", 00:16:10.111 "is_configured": true, 00:16:10.111 "data_offset": 2048, 00:16:10.111 "data_size": 63488 00:16:10.111 }, 00:16:10.111 { 00:16:10.111 "name": null, 00:16:10.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.111 "is_configured": false, 00:16:10.111 "data_offset": 2048, 00:16:10.111 "data_size": 63488 00:16:10.111 }, 00:16:10.111 { 00:16:10.111 "name": "BaseBdev3", 00:16:10.111 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:10.111 "is_configured": true, 00:16:10.111 "data_offset": 2048, 00:16:10.111 "data_size": 63488 00:16:10.111 }, 00:16:10.111 { 00:16:10.111 "name": "BaseBdev4", 00:16:10.111 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:10.111 "is_configured": true, 00:16:10.111 "data_offset": 2048, 00:16:10.111 "data_size": 63488 00:16:10.111 } 00:16:10.111 ] 00:16:10.111 }' 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.111 04:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.111 [2024-11-27 04:33:06.640143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.111 [2024-11-27 04:33:06.691044] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.111 [2024-11-27 04:33:06.691126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.111 [2024-11-27 04:33:06.691161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.111 [2024-11-27 04:33:06.691171] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.370 
04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.370 "name": "raid_bdev1", 00:16:10.370 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:10.370 "strip_size_kb": 0, 00:16:10.370 "state": "online", 00:16:10.370 "raid_level": "raid1", 00:16:10.370 "superblock": true, 00:16:10.370 "num_base_bdevs": 4, 00:16:10.370 "num_base_bdevs_discovered": 2, 00:16:10.370 "num_base_bdevs_operational": 2, 00:16:10.370 "base_bdevs_list": [ 00:16:10.370 { 00:16:10.370 "name": null, 00:16:10.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.370 "is_configured": false, 00:16:10.370 "data_offset": 0, 00:16:10.370 "data_size": 63488 00:16:10.370 }, 00:16:10.370 { 00:16:10.370 "name": null, 00:16:10.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.370 "is_configured": false, 00:16:10.370 "data_offset": 2048, 00:16:10.370 "data_size": 63488 00:16:10.370 }, 00:16:10.370 { 00:16:10.370 "name": "BaseBdev3", 00:16:10.370 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:10.370 "is_configured": true, 00:16:10.370 "data_offset": 2048, 00:16:10.370 "data_size": 63488 00:16:10.370 }, 00:16:10.370 { 00:16:10.370 "name": "BaseBdev4", 00:16:10.370 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:10.370 "is_configured": true, 00:16:10.370 "data_offset": 2048, 00:16:10.370 "data_size": 63488 00:16:10.370 } 00:16:10.370 ] 00:16:10.370 }' 00:16:10.370 04:33:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.370 04:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.630 04:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.887 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.887 "name": "raid_bdev1", 00:16:10.887 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:10.887 "strip_size_kb": 0, 00:16:10.887 "state": "online", 00:16:10.887 "raid_level": "raid1", 00:16:10.887 "superblock": true, 00:16:10.887 "num_base_bdevs": 4, 00:16:10.887 "num_base_bdevs_discovered": 2, 00:16:10.887 "num_base_bdevs_operational": 2, 00:16:10.887 "base_bdevs_list": [ 00:16:10.887 { 00:16:10.887 "name": null, 00:16:10.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.887 "is_configured": false, 00:16:10.887 "data_offset": 0, 00:16:10.887 "data_size": 63488 00:16:10.887 }, 00:16:10.887 
{ 00:16:10.887 "name": null, 00:16:10.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.887 "is_configured": false, 00:16:10.887 "data_offset": 2048, 00:16:10.887 "data_size": 63488 00:16:10.887 }, 00:16:10.887 { 00:16:10.887 "name": "BaseBdev3", 00:16:10.887 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:10.887 "is_configured": true, 00:16:10.887 "data_offset": 2048, 00:16:10.887 "data_size": 63488 00:16:10.887 }, 00:16:10.887 { 00:16:10.887 "name": "BaseBdev4", 00:16:10.887 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:10.887 "is_configured": true, 00:16:10.887 "data_offset": 2048, 00:16:10.888 "data_size": 63488 00:16:10.888 } 00:16:10.888 ] 00:16:10.888 }' 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.888 [2024-11-27 04:33:07.344827] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:10.888 [2024-11-27 04:33:07.344901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.888 [2024-11-27 04:33:07.344926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:10.888 [2024-11-27 04:33:07.344937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.888 [2024-11-27 04:33:07.345462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.888 [2024-11-27 04:33:07.345496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.888 [2024-11-27 04:33:07.345588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:10.888 [2024-11-27 04:33:07.345613] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:10.888 [2024-11-27 04:33:07.345622] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:10.888 [2024-11-27 04:33:07.345648] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:10.888 BaseBdev1 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.888 04:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.824 04:33:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.824 "name": "raid_bdev1", 00:16:11.824 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:11.824 "strip_size_kb": 0, 00:16:11.824 "state": "online", 00:16:11.824 "raid_level": "raid1", 00:16:11.824 "superblock": true, 00:16:11.824 "num_base_bdevs": 4, 00:16:11.824 "num_base_bdevs_discovered": 2, 00:16:11.824 "num_base_bdevs_operational": 2, 00:16:11.824 "base_bdevs_list": [ 00:16:11.824 { 00:16:11.824 "name": null, 00:16:11.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.824 "is_configured": false, 00:16:11.824 "data_offset": 0, 00:16:11.824 "data_size": 63488 00:16:11.824 }, 00:16:11.824 { 00:16:11.824 "name": null, 00:16:11.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.824 
"is_configured": false, 00:16:11.824 "data_offset": 2048, 00:16:11.824 "data_size": 63488 00:16:11.824 }, 00:16:11.824 { 00:16:11.824 "name": "BaseBdev3", 00:16:11.824 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:11.824 "is_configured": true, 00:16:11.824 "data_offset": 2048, 00:16:11.824 "data_size": 63488 00:16:11.824 }, 00:16:11.824 { 00:16:11.824 "name": "BaseBdev4", 00:16:11.824 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:11.824 "is_configured": true, 00:16:11.824 "data_offset": 2048, 00:16:11.824 "data_size": 63488 00:16:11.824 } 00:16:11.824 ] 00:16:11.824 }' 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.824 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.393 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:12.393 "name": "raid_bdev1", 00:16:12.393 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:12.393 "strip_size_kb": 0, 00:16:12.393 "state": "online", 00:16:12.393 "raid_level": "raid1", 00:16:12.393 "superblock": true, 00:16:12.393 "num_base_bdevs": 4, 00:16:12.393 "num_base_bdevs_discovered": 2, 00:16:12.393 "num_base_bdevs_operational": 2, 00:16:12.393 "base_bdevs_list": [ 00:16:12.393 { 00:16:12.393 "name": null, 00:16:12.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.393 "is_configured": false, 00:16:12.394 "data_offset": 0, 00:16:12.394 "data_size": 63488 00:16:12.394 }, 00:16:12.394 { 00:16:12.394 "name": null, 00:16:12.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.394 "is_configured": false, 00:16:12.394 "data_offset": 2048, 00:16:12.394 "data_size": 63488 00:16:12.394 }, 00:16:12.394 { 00:16:12.394 "name": "BaseBdev3", 00:16:12.394 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:12.394 "is_configured": true, 00:16:12.394 "data_offset": 2048, 00:16:12.394 "data_size": 63488 00:16:12.394 }, 00:16:12.394 { 00:16:12.394 "name": "BaseBdev4", 00:16:12.394 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:12.394 "is_configured": true, 00:16:12.394 "data_offset": 2048, 00:16:12.394 "data_size": 63488 00:16:12.394 } 00:16:12.394 ] 00:16:12.394 }' 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.394 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.394 [2024-11-27 04:33:08.974119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.394 [2024-11-27 04:33:08.974330] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:12.394 [2024-11-27 04:33:08.974351] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.653 request: 00:16:12.653 { 00:16:12.653 "base_bdev": "BaseBdev1", 00:16:12.653 "raid_bdev": "raid_bdev1", 00:16:12.653 "method": "bdev_raid_add_base_bdev", 00:16:12.653 "req_id": 1 00:16:12.653 } 00:16:12.653 Got JSON-RPC error response 00:16:12.653 response: 00:16:12.653 { 00:16:12.653 "code": -22, 00:16:12.653 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:12.653 } 00:16:12.653 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:12.653 04:33:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:16:12.653 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:12.653 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:12.653 04:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:12.653 04:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.590 04:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:13.590 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.590 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.590 "name": "raid_bdev1", 00:16:13.590 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:13.590 "strip_size_kb": 0, 00:16:13.590 "state": "online", 00:16:13.590 "raid_level": "raid1", 00:16:13.590 "superblock": true, 00:16:13.590 "num_base_bdevs": 4, 00:16:13.590 "num_base_bdevs_discovered": 2, 00:16:13.590 "num_base_bdevs_operational": 2, 00:16:13.590 "base_bdevs_list": [ 00:16:13.590 { 00:16:13.590 "name": null, 00:16:13.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.590 "is_configured": false, 00:16:13.590 "data_offset": 0, 00:16:13.590 "data_size": 63488 00:16:13.590 }, 00:16:13.590 { 00:16:13.590 "name": null, 00:16:13.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.590 "is_configured": false, 00:16:13.590 "data_offset": 2048, 00:16:13.590 "data_size": 63488 00:16:13.590 }, 00:16:13.590 { 00:16:13.590 "name": "BaseBdev3", 00:16:13.590 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:13.590 "is_configured": true, 00:16:13.590 "data_offset": 2048, 00:16:13.590 "data_size": 63488 00:16:13.590 }, 00:16:13.590 { 00:16:13.590 "name": "BaseBdev4", 00:16:13.590 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:13.590 "is_configured": true, 00:16:13.590 "data_offset": 2048, 00:16:13.590 "data_size": 63488 00:16:13.590 } 00:16:13.590 ] 00:16:13.590 }' 00:16:13.590 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.590 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.183 04:33:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.183 "name": "raid_bdev1", 00:16:14.183 "uuid": "f29ae053-2b4a-4749-989f-593cce0d751c", 00:16:14.183 "strip_size_kb": 0, 00:16:14.183 "state": "online", 00:16:14.183 "raid_level": "raid1", 00:16:14.183 "superblock": true, 00:16:14.183 "num_base_bdevs": 4, 00:16:14.183 "num_base_bdevs_discovered": 2, 00:16:14.183 "num_base_bdevs_operational": 2, 00:16:14.183 "base_bdevs_list": [ 00:16:14.183 { 00:16:14.183 "name": null, 00:16:14.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.183 "is_configured": false, 00:16:14.183 "data_offset": 0, 00:16:14.183 "data_size": 63488 00:16:14.183 }, 00:16:14.183 { 00:16:14.183 "name": null, 00:16:14.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.183 "is_configured": false, 00:16:14.183 "data_offset": 2048, 00:16:14.183 "data_size": 63488 00:16:14.183 }, 00:16:14.183 { 00:16:14.183 "name": "BaseBdev3", 00:16:14.183 "uuid": "5d0f54ba-b374-5b15-964b-d60dead2c35b", 00:16:14.183 "is_configured": true, 00:16:14.183 "data_offset": 2048, 00:16:14.183 "data_size": 63488 00:16:14.183 }, 
00:16:14.183 { 00:16:14.183 "name": "BaseBdev4", 00:16:14.183 "uuid": "9df2f5d1-004e-5b8e-bed0-34cd18f51d6a", 00:16:14.183 "is_configured": true, 00:16:14.183 "data_offset": 2048, 00:16:14.183 "data_size": 63488 00:16:14.183 } 00:16:14.183 ] 00:16:14.183 }' 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78317 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78317 ']' 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78317 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78317 00:16:14.183 killing process with pid 78317 00:16:14.183 Received shutdown signal, test time was about 60.000000 seconds 00:16:14.183 00:16:14.183 Latency(us) 00:16:14.183 [2024-11-27T04:33:10.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.183 [2024-11-27T04:33:10.770Z] =================================================================================================================== 00:16:14.183 [2024-11-27T04:33:10.770Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78317' 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78317 00:16:14.183 04:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78317 00:16:14.183 [2024-11-27 04:33:10.671747] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.183 [2024-11-27 04:33:10.671876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.184 [2024-11-27 04:33:10.671973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.184 [2024-11-27 04:33:10.671992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:14.772 [2024-11-27 04:33:11.212516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:16.158 00:16:16.158 real 0m25.906s 00:16:16.158 user 0m31.579s 00:16:16.158 sys 0m3.754s 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.158 ************************************ 00:16:16.158 END TEST raid_rebuild_test_sb 00:16:16.158 ************************************ 00:16:16.158 04:33:12 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:16:16.158 04:33:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:16.158 04:33:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.158 04:33:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:16:16.158 ************************************ 00:16:16.158 START TEST raid_rebuild_test_io 00:16:16.158 ************************************ 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:16.158 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79076 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79076 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79076 ']' 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.159 04:33:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.159 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:16.159 Zero copy mechanism will not be used. 00:16:16.159 [2024-11-27 04:33:12.584655] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:16.159 [2024-11-27 04:33:12.584778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79076 ] 00:16:16.159 [2024-11-27 04:33:12.738794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.417 [2024-11-27 04:33:12.858737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.676 [2024-11-27 04:33:13.076806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.676 [2024-11-27 04:33:13.076850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.936 BaseBdev1_malloc 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.936 [2024-11-27 04:33:13.485742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:16.936 [2024-11-27 04:33:13.485818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.936 [2024-11-27 04:33:13.485841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:16.936 [2024-11-27 04:33:13.485852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.936 [2024-11-27 04:33:13.488214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.936 [2024-11-27 04:33:13.488263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:16.936 BaseBdev1 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.936 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:16:17.195 BaseBdev2_malloc 00:16:17.195 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.195 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:17.195 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.195 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.195 [2024-11-27 04:33:13.542073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:17.195 [2024-11-27 04:33:13.542147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.195 [2024-11-27 04:33:13.542171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:17.195 [2024-11-27 04:33:13.542182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.195 [2024-11-27 04:33:13.544477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.195 [2024-11-27 04:33:13.544520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:17.195 BaseBdev2 00:16:17.195 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.195 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:17.195 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:17.195 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.195 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 BaseBdev3_malloc 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 [2024-11-27 04:33:13.606518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:17.196 [2024-11-27 04:33:13.606594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.196 [2024-11-27 04:33:13.606617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:17.196 [2024-11-27 04:33:13.606644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.196 [2024-11-27 04:33:13.608997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.196 [2024-11-27 04:33:13.609040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:17.196 BaseBdev3 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 BaseBdev4_malloc 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 [2024-11-27 04:33:13.662379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:17.196 [2024-11-27 04:33:13.662457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.196 [2024-11-27 04:33:13.662483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:17.196 [2024-11-27 04:33:13.662495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.196 [2024-11-27 04:33:13.664952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.196 [2024-11-27 04:33:13.665021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:17.196 BaseBdev4 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 spare_malloc 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 spare_delay 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 [2024-11-27 04:33:13.731767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:17.196 [2024-11-27 04:33:13.731831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.196 [2024-11-27 04:33:13.731853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:17.196 [2024-11-27 04:33:13.731865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.196 [2024-11-27 04:33:13.734269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.196 [2024-11-27 04:33:13.734314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:17.196 spare 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 [2024-11-27 04:33:13.743806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.196 [2024-11-27 04:33:13.745852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.196 [2024-11-27 04:33:13.745991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.196 [2024-11-27 04:33:13.746052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:16:17.196 [2024-11-27 04:33:13.746178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:17.196 [2024-11-27 04:33:13.746193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:17.196 [2024-11-27 04:33:13.746503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:17.196 [2024-11-27 04:33:13.746705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:17.196 [2024-11-27 04:33:13.746719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:17.196 [2024-11-27 04:33:13.746897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.196 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.533 04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.533 "name": "raid_bdev1", 00:16:17.533 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:17.533 "strip_size_kb": 0, 00:16:17.533 "state": "online", 00:16:17.533 "raid_level": "raid1", 00:16:17.533 "superblock": false, 00:16:17.533 "num_base_bdevs": 4, 00:16:17.533 "num_base_bdevs_discovered": 4, 00:16:17.533 "num_base_bdevs_operational": 4, 00:16:17.533 "base_bdevs_list": [ 00:16:17.533 { 00:16:17.533 "name": "BaseBdev1", 00:16:17.533 "uuid": "86aa0088-8f4b-5671-a9c7-74f928bd9217", 00:16:17.533 "is_configured": true, 00:16:17.533 "data_offset": 0, 00:16:17.533 "data_size": 65536 00:16:17.533 }, 00:16:17.533 { 00:16:17.533 "name": "BaseBdev2", 00:16:17.533 "uuid": "d86b1d44-0da2-5fed-8a68-85c354aeed02", 00:16:17.533 "is_configured": true, 00:16:17.533 "data_offset": 0, 00:16:17.533 "data_size": 65536 00:16:17.533 }, 00:16:17.533 { 00:16:17.533 "name": "BaseBdev3", 00:16:17.533 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:17.533 "is_configured": true, 00:16:17.533 "data_offset": 0, 00:16:17.533 "data_size": 65536 00:16:17.533 }, 00:16:17.533 { 00:16:17.533 "name": "BaseBdev4", 00:16:17.533 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:17.533 "is_configured": true, 00:16:17.533 "data_offset": 0, 00:16:17.533 "data_size": 65536 00:16:17.533 } 00:16:17.533 ] 00:16:17.533 }' 00:16:17.533 
04:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.533 04:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.793 [2024-11-27 04:33:14.251404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:17.793 04:33:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.793 [2024-11-27 04:33:14.330877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.793 04:33:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.054 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.054 "name": "raid_bdev1", 00:16:18.054 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:18.054 "strip_size_kb": 0, 00:16:18.054 "state": "online", 00:16:18.054 "raid_level": "raid1", 00:16:18.054 "superblock": false, 00:16:18.054 "num_base_bdevs": 4, 00:16:18.054 "num_base_bdevs_discovered": 3, 00:16:18.054 "num_base_bdevs_operational": 3, 00:16:18.054 "base_bdevs_list": [ 00:16:18.054 { 00:16:18.054 "name": null, 00:16:18.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.054 "is_configured": false, 00:16:18.054 "data_offset": 0, 00:16:18.054 "data_size": 65536 00:16:18.054 }, 00:16:18.054 { 00:16:18.054 "name": "BaseBdev2", 00:16:18.054 "uuid": "d86b1d44-0da2-5fed-8a68-85c354aeed02", 00:16:18.054 "is_configured": true, 00:16:18.054 "data_offset": 0, 00:16:18.054 "data_size": 65536 00:16:18.054 }, 00:16:18.054 { 00:16:18.054 "name": "BaseBdev3", 00:16:18.054 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:18.054 "is_configured": true, 00:16:18.054 "data_offset": 0, 00:16:18.054 "data_size": 65536 00:16:18.054 }, 00:16:18.054 { 00:16:18.054 "name": "BaseBdev4", 00:16:18.054 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:18.054 "is_configured": true, 00:16:18.054 "data_offset": 0, 00:16:18.054 "data_size": 65536 00:16:18.054 } 00:16:18.054 ] 00:16:18.054 }' 00:16:18.054 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.054 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.054 [2024-11-27 04:33:14.430528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:18.054 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:18.054 Zero copy mechanism will not be used. 00:16:18.054 Running I/O for 60 seconds... 
00:16:18.313 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.313 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.313 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.313 [2024-11-27 04:33:14.751575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.313 04:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.313 04:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:18.313 [2024-11-27 04:33:14.797188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:18.313 [2024-11-27 04:33:14.799284] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.574 [2024-11-27 04:33:14.918919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:18.574 [2024-11-27 04:33:15.037210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:18.574 [2024-11-27 04:33:15.037673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:19.400 162.00 IOPS, 486.00 MiB/s [2024-11-27T04:33:15.987Z] [2024-11-27 04:33:15.750010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.400 04:33:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.400 "name": "raid_bdev1", 00:16:19.400 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:19.400 "strip_size_kb": 0, 00:16:19.400 "state": "online", 00:16:19.400 "raid_level": "raid1", 00:16:19.400 "superblock": false, 00:16:19.400 "num_base_bdevs": 4, 00:16:19.400 "num_base_bdevs_discovered": 4, 00:16:19.400 "num_base_bdevs_operational": 4, 00:16:19.400 "process": { 00:16:19.400 "type": "rebuild", 00:16:19.400 "target": "spare", 00:16:19.400 "progress": { 00:16:19.400 "blocks": 14336, 00:16:19.400 "percent": 21 00:16:19.400 } 00:16:19.400 }, 00:16:19.400 "base_bdevs_list": [ 00:16:19.400 { 00:16:19.400 "name": "spare", 00:16:19.400 "uuid": "d13874e3-6a82-5833-abce-d22896b5e633", 00:16:19.400 "is_configured": true, 00:16:19.400 "data_offset": 0, 00:16:19.400 "data_size": 65536 00:16:19.400 }, 00:16:19.400 { 00:16:19.400 "name": "BaseBdev2", 00:16:19.400 "uuid": "d86b1d44-0da2-5fed-8a68-85c354aeed02", 00:16:19.400 "is_configured": true, 00:16:19.400 "data_offset": 0, 00:16:19.400 "data_size": 65536 00:16:19.400 }, 00:16:19.400 { 00:16:19.400 "name": "BaseBdev3", 00:16:19.400 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:19.400 
"is_configured": true, 00:16:19.400 "data_offset": 0, 00:16:19.400 "data_size": 65536 00:16:19.400 }, 00:16:19.400 { 00:16:19.400 "name": "BaseBdev4", 00:16:19.400 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:19.400 "is_configured": true, 00:16:19.400 "data_offset": 0, 00:16:19.400 "data_size": 65536 00:16:19.400 } 00:16:19.400 ] 00:16:19.400 }' 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.400 04:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.400 [2024-11-27 04:33:15.952500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.659 [2024-11-27 04:33:16.016352] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:19.659 [2024-11-27 04:33:16.019208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.659 [2024-11-27 04:33:16.019306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.659 [2024-11-27 04:33:16.019330] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:19.659 [2024-11-27 04:33:16.046069] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.659 "name": "raid_bdev1", 00:16:19.659 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:19.659 "strip_size_kb": 0, 00:16:19.659 "state": "online", 00:16:19.659 "raid_level": "raid1", 00:16:19.659 "superblock": false, 
00:16:19.659 "num_base_bdevs": 4, 00:16:19.659 "num_base_bdevs_discovered": 3, 00:16:19.659 "num_base_bdevs_operational": 3, 00:16:19.659 "base_bdevs_list": [ 00:16:19.659 { 00:16:19.659 "name": null, 00:16:19.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.659 "is_configured": false, 00:16:19.659 "data_offset": 0, 00:16:19.659 "data_size": 65536 00:16:19.659 }, 00:16:19.659 { 00:16:19.659 "name": "BaseBdev2", 00:16:19.659 "uuid": "d86b1d44-0da2-5fed-8a68-85c354aeed02", 00:16:19.659 "is_configured": true, 00:16:19.659 "data_offset": 0, 00:16:19.659 "data_size": 65536 00:16:19.659 }, 00:16:19.659 { 00:16:19.659 "name": "BaseBdev3", 00:16:19.659 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:19.659 "is_configured": true, 00:16:19.659 "data_offset": 0, 00:16:19.659 "data_size": 65536 00:16:19.659 }, 00:16:19.659 { 00:16:19.659 "name": "BaseBdev4", 00:16:19.659 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:19.659 "is_configured": true, 00:16:19.659 "data_offset": 0, 00:16:19.659 "data_size": 65536 00:16:19.659 } 00:16:19.659 ] 00:16:19.659 }' 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.659 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.177 159.00 IOPS, 477.00 MiB/s [2024-11-27T04:33:16.764Z] 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.177 "name": "raid_bdev1", 00:16:20.177 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:20.177 "strip_size_kb": 0, 00:16:20.177 "state": "online", 00:16:20.177 "raid_level": "raid1", 00:16:20.177 "superblock": false, 00:16:20.177 "num_base_bdevs": 4, 00:16:20.177 "num_base_bdevs_discovered": 3, 00:16:20.177 "num_base_bdevs_operational": 3, 00:16:20.177 "base_bdevs_list": [ 00:16:20.177 { 00:16:20.177 "name": null, 00:16:20.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.177 "is_configured": false, 00:16:20.177 "data_offset": 0, 00:16:20.177 "data_size": 65536 00:16:20.177 }, 00:16:20.177 { 00:16:20.177 "name": "BaseBdev2", 00:16:20.177 "uuid": "d86b1d44-0da2-5fed-8a68-85c354aeed02", 00:16:20.177 "is_configured": true, 00:16:20.177 "data_offset": 0, 00:16:20.177 "data_size": 65536 00:16:20.177 }, 00:16:20.177 { 00:16:20.177 "name": "BaseBdev3", 00:16:20.177 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:20.177 "is_configured": true, 00:16:20.177 "data_offset": 0, 00:16:20.177 "data_size": 65536 00:16:20.177 }, 00:16:20.177 { 00:16:20.177 "name": "BaseBdev4", 00:16:20.177 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:20.177 "is_configured": true, 00:16:20.177 "data_offset": 0, 00:16:20.177 "data_size": 65536 00:16:20.177 } 00:16:20.177 ] 00:16:20.177 }' 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.177 04:33:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.177 [2024-11-27 04:33:16.701150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.177 04:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:20.459 [2024-11-27 04:33:16.784371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:20.459 [2024-11-27 04:33:16.786624] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.459 [2024-11-27 04:33:16.903354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:20.459 [2024-11-27 04:33:16.904889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:20.720 [2024-11-27 04:33:17.148050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:20.720 [2024-11-27 04:33:17.148451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:20.979 153.00 IOPS, 459.00 MiB/s [2024-11-27T04:33:17.566Z] [2024-11-27 04:33:17.500702] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:21.238 [2024-11-27 04:33:17.635349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.238 "name": "raid_bdev1", 00:16:21.238 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:21.238 "strip_size_kb": 0, 00:16:21.238 "state": "online", 00:16:21.238 "raid_level": "raid1", 00:16:21.238 "superblock": false, 00:16:21.238 "num_base_bdevs": 4, 00:16:21.238 "num_base_bdevs_discovered": 4, 00:16:21.238 "num_base_bdevs_operational": 4, 00:16:21.238 "process": { 00:16:21.238 "type": "rebuild", 00:16:21.238 "target": "spare", 00:16:21.238 "progress": { 00:16:21.238 "blocks": 10240, 00:16:21.238 "percent": 15 00:16:21.238 } 
00:16:21.238 }, 00:16:21.238 "base_bdevs_list": [ 00:16:21.238 { 00:16:21.238 "name": "spare", 00:16:21.238 "uuid": "d13874e3-6a82-5833-abce-d22896b5e633", 00:16:21.238 "is_configured": true, 00:16:21.238 "data_offset": 0, 00:16:21.238 "data_size": 65536 00:16:21.238 }, 00:16:21.238 { 00:16:21.238 "name": "BaseBdev2", 00:16:21.238 "uuid": "d86b1d44-0da2-5fed-8a68-85c354aeed02", 00:16:21.238 "is_configured": true, 00:16:21.238 "data_offset": 0, 00:16:21.238 "data_size": 65536 00:16:21.238 }, 00:16:21.238 { 00:16:21.238 "name": "BaseBdev3", 00:16:21.238 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:21.238 "is_configured": true, 00:16:21.238 "data_offset": 0, 00:16:21.238 "data_size": 65536 00:16:21.238 }, 00:16:21.238 { 00:16:21.238 "name": "BaseBdev4", 00:16:21.238 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:21.238 "is_configured": true, 00:16:21.238 "data_offset": 0, 00:16:21.238 "data_size": 65536 00:16:21.238 } 00:16:21.238 ] 00:16:21.238 }' 00:16:21.238 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.497 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.497 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.497 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.497 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:21.497 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:21.497 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:21.497 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:21.497 04:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:21.497 
04:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.497 04:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.497 [2024-11-27 04:33:17.895971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.497 [2024-11-27 04:33:17.998216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:21.497 [2024-11-27 04:33:17.998734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:21.497 [2024-11-27 04:33:18.005098] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:21.497 [2024-11-27 04:33:18.005162] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.497 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.497 "name": "raid_bdev1", 00:16:21.497 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:21.497 "strip_size_kb": 0, 00:16:21.497 "state": "online", 00:16:21.497 "raid_level": "raid1", 00:16:21.497 "superblock": false, 00:16:21.497 "num_base_bdevs": 4, 00:16:21.497 "num_base_bdevs_discovered": 3, 00:16:21.497 "num_base_bdevs_operational": 3, 00:16:21.497 "process": { 00:16:21.497 "type": "rebuild", 00:16:21.497 "target": "spare", 00:16:21.497 "progress": { 00:16:21.497 "blocks": 14336, 00:16:21.497 "percent": 21 00:16:21.497 } 00:16:21.497 }, 00:16:21.497 "base_bdevs_list": [ 00:16:21.497 { 00:16:21.497 "name": "spare", 00:16:21.497 "uuid": "d13874e3-6a82-5833-abce-d22896b5e633", 00:16:21.497 "is_configured": true, 00:16:21.497 "data_offset": 0, 00:16:21.497 "data_size": 65536 00:16:21.497 }, 00:16:21.497 { 00:16:21.497 "name": null, 00:16:21.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.497 "is_configured": false, 00:16:21.497 "data_offset": 0, 00:16:21.497 "data_size": 65536 00:16:21.497 }, 00:16:21.497 { 00:16:21.497 "name": "BaseBdev3", 00:16:21.497 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:21.497 "is_configured": true, 00:16:21.497 "data_offset": 0, 00:16:21.497 "data_size": 65536 00:16:21.497 }, 00:16:21.497 { 00:16:21.497 "name": "BaseBdev4", 00:16:21.497 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:21.497 "is_configured": true, 00:16:21.497 "data_offset": 0, 00:16:21.497 "data_size": 65536 00:16:21.497 } 00:16:21.497 ] 00:16:21.497 }' 00:16:21.497 04:33:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=506 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.756 "name": "raid_bdev1", 00:16:21.756 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:21.756 "strip_size_kb": 0, 00:16:21.756 
"state": "online", 00:16:21.756 "raid_level": "raid1", 00:16:21.756 "superblock": false, 00:16:21.756 "num_base_bdevs": 4, 00:16:21.756 "num_base_bdevs_discovered": 3, 00:16:21.756 "num_base_bdevs_operational": 3, 00:16:21.756 "process": { 00:16:21.756 "type": "rebuild", 00:16:21.756 "target": "spare", 00:16:21.756 "progress": { 00:16:21.756 "blocks": 16384, 00:16:21.756 "percent": 25 00:16:21.756 } 00:16:21.756 }, 00:16:21.756 "base_bdevs_list": [ 00:16:21.756 { 00:16:21.756 "name": "spare", 00:16:21.756 "uuid": "d13874e3-6a82-5833-abce-d22896b5e633", 00:16:21.756 "is_configured": true, 00:16:21.756 "data_offset": 0, 00:16:21.756 "data_size": 65536 00:16:21.756 }, 00:16:21.756 { 00:16:21.756 "name": null, 00:16:21.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.756 "is_configured": false, 00:16:21.756 "data_offset": 0, 00:16:21.756 "data_size": 65536 00:16:21.756 }, 00:16:21.756 { 00:16:21.756 "name": "BaseBdev3", 00:16:21.756 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:21.756 "is_configured": true, 00:16:21.756 "data_offset": 0, 00:16:21.756 "data_size": 65536 00:16:21.756 }, 00:16:21.756 { 00:16:21.756 "name": "BaseBdev4", 00:16:21.756 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:21.756 "is_configured": true, 00:16:21.756 "data_offset": 0, 00:16:21.756 "data_size": 65536 00:16:21.756 } 00:16:21.756 ] 00:16:21.756 }' 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.756 04:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.015 [2024-11-27 04:33:18.352443] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:22.015 133.00 IOPS, 399.00 MiB/s [2024-11-27T04:33:18.602Z] [2024-11-27 04:33:18.577831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.951 [2024-11-27 04:33:19.345858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.951 "name": "raid_bdev1", 00:16:22.951 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:22.951 "strip_size_kb": 0, 00:16:22.951 "state": "online", 00:16:22.951 "raid_level": 
"raid1", 00:16:22.951 "superblock": false, 00:16:22.951 "num_base_bdevs": 4, 00:16:22.951 "num_base_bdevs_discovered": 3, 00:16:22.951 "num_base_bdevs_operational": 3, 00:16:22.951 "process": { 00:16:22.951 "type": "rebuild", 00:16:22.951 "target": "spare", 00:16:22.951 "progress": { 00:16:22.951 "blocks": 32768, 00:16:22.951 "percent": 50 00:16:22.951 } 00:16:22.951 }, 00:16:22.951 "base_bdevs_list": [ 00:16:22.951 { 00:16:22.951 "name": "spare", 00:16:22.951 "uuid": "d13874e3-6a82-5833-abce-d22896b5e633", 00:16:22.951 "is_configured": true, 00:16:22.951 "data_offset": 0, 00:16:22.951 "data_size": 65536 00:16:22.951 }, 00:16:22.951 { 00:16:22.951 "name": null, 00:16:22.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.951 "is_configured": false, 00:16:22.951 "data_offset": 0, 00:16:22.951 "data_size": 65536 00:16:22.951 }, 00:16:22.951 { 00:16:22.951 "name": "BaseBdev3", 00:16:22.951 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:22.951 "is_configured": true, 00:16:22.951 "data_offset": 0, 00:16:22.951 "data_size": 65536 00:16:22.951 }, 00:16:22.951 { 00:16:22.951 "name": "BaseBdev4", 00:16:22.951 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:22.951 "is_configured": true, 00:16:22.951 "data_offset": 0, 00:16:22.951 "data_size": 65536 00:16:22.951 } 00:16:22.951 ] 00:16:22.951 }' 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.951 114.40 IOPS, 343.20 MiB/s [2024-11-27T04:33:19.538Z] 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.951 04:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.211 [2024-11-27 04:33:19.658767] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:23.211 [2024-11-27 04:33:19.776019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:23.470 [2024-11-27 04:33:20.021198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:24.037 103.33 IOPS, 310.00 MiB/s [2024-11-27T04:33:20.624Z] 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.037 [2024-11-27 04:33:20.469395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.037 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.037 "name": 
"raid_bdev1", 00:16:24.037 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:24.037 "strip_size_kb": 0, 00:16:24.037 "state": "online", 00:16:24.038 "raid_level": "raid1", 00:16:24.038 "superblock": false, 00:16:24.038 "num_base_bdevs": 4, 00:16:24.038 "num_base_bdevs_discovered": 3, 00:16:24.038 "num_base_bdevs_operational": 3, 00:16:24.038 "process": { 00:16:24.038 "type": "rebuild", 00:16:24.038 "target": "spare", 00:16:24.038 "progress": { 00:16:24.038 "blocks": 49152, 00:16:24.038 "percent": 75 00:16:24.038 } 00:16:24.038 }, 00:16:24.038 "base_bdevs_list": [ 00:16:24.038 { 00:16:24.038 "name": "spare", 00:16:24.038 "uuid": "d13874e3-6a82-5833-abce-d22896b5e633", 00:16:24.038 "is_configured": true, 00:16:24.038 "data_offset": 0, 00:16:24.038 "data_size": 65536 00:16:24.038 }, 00:16:24.038 { 00:16:24.038 "name": null, 00:16:24.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.038 "is_configured": false, 00:16:24.038 "data_offset": 0, 00:16:24.038 "data_size": 65536 00:16:24.038 }, 00:16:24.038 { 00:16:24.038 "name": "BaseBdev3", 00:16:24.038 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:24.038 "is_configured": true, 00:16:24.038 "data_offset": 0, 00:16:24.038 "data_size": 65536 00:16:24.038 }, 00:16:24.038 { 00:16:24.038 "name": "BaseBdev4", 00:16:24.038 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:24.038 "is_configured": true, 00:16:24.038 "data_offset": 0, 00:16:24.038 "data_size": 65536 00:16:24.038 } 00:16:24.038 ] 00:16:24.038 }' 00:16:24.038 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.038 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.038 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.038 04:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.038 04:33:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.975 [2024-11-27 04:33:21.224466] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:24.975 [2024-11-27 04:33:21.324334] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:24.975 [2024-11-27 04:33:21.333646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.235 92.57 IOPS, 277.71 MiB/s [2024-11-27T04:33:21.822Z] 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.235 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.235 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.236 "name": "raid_bdev1", 00:16:25.236 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:25.236 "strip_size_kb": 0, 00:16:25.236 "state": "online", 00:16:25.236 
"raid_level": "raid1", 00:16:25.236 "superblock": false, 00:16:25.236 "num_base_bdevs": 4, 00:16:25.236 "num_base_bdevs_discovered": 3, 00:16:25.236 "num_base_bdevs_operational": 3, 00:16:25.236 "base_bdevs_list": [ 00:16:25.236 { 00:16:25.236 "name": "spare", 00:16:25.236 "uuid": "d13874e3-6a82-5833-abce-d22896b5e633", 00:16:25.236 "is_configured": true, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 65536 00:16:25.236 }, 00:16:25.236 { 00:16:25.236 "name": null, 00:16:25.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.236 "is_configured": false, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 65536 00:16:25.236 }, 00:16:25.236 { 00:16:25.236 "name": "BaseBdev3", 00:16:25.236 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:25.236 "is_configured": true, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 65536 00:16:25.236 }, 00:16:25.236 { 00:16:25.236 "name": "BaseBdev4", 00:16:25.236 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:25.236 "is_configured": true, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 65536 00:16:25.236 } 00:16:25.236 ] 00:16:25.236 }' 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.236 "name": "raid_bdev1", 00:16:25.236 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:25.236 "strip_size_kb": 0, 00:16:25.236 "state": "online", 00:16:25.236 "raid_level": "raid1", 00:16:25.236 "superblock": false, 00:16:25.236 "num_base_bdevs": 4, 00:16:25.236 "num_base_bdevs_discovered": 3, 00:16:25.236 "num_base_bdevs_operational": 3, 00:16:25.236 "base_bdevs_list": [ 00:16:25.236 { 00:16:25.236 "name": "spare", 00:16:25.236 "uuid": "d13874e3-6a82-5833-abce-d22896b5e633", 00:16:25.236 "is_configured": true, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 65536 00:16:25.236 }, 00:16:25.236 { 00:16:25.236 "name": null, 00:16:25.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.236 "is_configured": false, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 65536 00:16:25.236 }, 00:16:25.236 { 00:16:25.236 "name": "BaseBdev3", 00:16:25.236 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:25.236 "is_configured": true, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 65536 00:16:25.236 }, 00:16:25.236 { 00:16:25.236 "name": "BaseBdev4", 
00:16:25.236 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:25.236 "is_configured": true, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 65536 00:16:25.236 } 00:16:25.236 ] 00:16:25.236 }' 00:16:25.236 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.496 "name": "raid_bdev1", 00:16:25.496 "uuid": "870eacea-abc6-41c3-bff1-c9cb34e8fdd1", 00:16:25.496 "strip_size_kb": 0, 00:16:25.496 "state": "online", 00:16:25.496 "raid_level": "raid1", 00:16:25.496 "superblock": false, 00:16:25.496 "num_base_bdevs": 4, 00:16:25.496 "num_base_bdevs_discovered": 3, 00:16:25.496 "num_base_bdevs_operational": 3, 00:16:25.496 "base_bdevs_list": [ 00:16:25.496 { 00:16:25.496 "name": "spare", 00:16:25.496 "uuid": "d13874e3-6a82-5833-abce-d22896b5e633", 00:16:25.496 "is_configured": true, 00:16:25.496 "data_offset": 0, 00:16:25.496 "data_size": 65536 00:16:25.496 }, 00:16:25.496 { 00:16:25.496 "name": null, 00:16:25.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.496 "is_configured": false, 00:16:25.496 "data_offset": 0, 00:16:25.496 "data_size": 65536 00:16:25.496 }, 00:16:25.496 { 00:16:25.496 "name": "BaseBdev3", 00:16:25.496 "uuid": "f6f077a7-a4e8-55b2-aeeb-71656b58f051", 00:16:25.496 "is_configured": true, 00:16:25.496 "data_offset": 0, 00:16:25.496 "data_size": 65536 00:16:25.496 }, 00:16:25.496 { 00:16:25.496 "name": "BaseBdev4", 00:16:25.496 "uuid": "bc02d31f-02a6-560d-9183-947de37f9b13", 00:16:25.496 "is_configured": true, 00:16:25.496 "data_offset": 0, 00:16:25.496 "data_size": 65536 00:16:25.496 } 00:16:25.496 ] 00:16:25.496 }' 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.496 04:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.755 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:16:25.755 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.755 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.755 [2024-11-27 04:33:22.297339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.755 [2024-11-27 04:33:22.297435] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.755 00:16:25.755 Latency(us) 00:16:25.755 [2024-11-27T04:33:22.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.755 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:25.755 raid_bdev1 : 7.90 86.54 259.63 0.00 0.00 16862.64 366.67 119968.08 00:16:25.755 [2024-11-27T04:33:22.342Z] =================================================================================================================== 00:16:25.755 [2024-11-27T04:33:22.342Z] Total : 86.54 259.63 0.00 0.00 16862.64 366.67 119968.08 00:16:26.017 [2024-11-27 04:33:22.344291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.017 [2024-11-27 04:33:22.344448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.017 [2024-11-27 04:33:22.344582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.017 [2024-11-27 04:33:22.344664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:26.017 { 00:16:26.017 "results": [ 00:16:26.017 { 00:16:26.017 "job": "raid_bdev1", 00:16:26.017 "core_mask": "0x1", 00:16:26.017 "workload": "randrw", 00:16:26.017 "percentage": 50, 00:16:26.017 "status": "finished", 00:16:26.017 "queue_depth": 2, 00:16:26.017 "io_size": 3145728, 00:16:26.017 "runtime": 7.903683, 00:16:26.017 "iops": 86.5419324130282, 00:16:26.017 "mibps": 
259.6257972390846, 00:16:26.017 "io_failed": 0, 00:16:26.017 "io_timeout": 0, 00:16:26.017 "avg_latency_us": 16862.64070073291, 00:16:26.017 "min_latency_us": 366.67248908296943, 00:16:26.017 "max_latency_us": 119968.08384279476 00:16:26.017 } 00:16:26.017 ], 00:16:26.017 "core_count": 1 00:16:26.017 } 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.017 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:26.018 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.018 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:26.018 04:33:22 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.018 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.018 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:26.278 /dev/nbd0 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.278 1+0 records in 00:16:26.278 1+0 records out 00:16:26.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059168 s, 6.9 MB/s 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:26.278 
04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 
)) 00:16:26.278 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:26.537 /dev/nbd1 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.537 1+0 records in 00:16:26.537 1+0 records out 00:16:26.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393538 s, 10.4 MB/s 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.537 04:33:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.537 04:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@41 -- # break 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.834 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:27.094 /dev/nbd1 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- 
# (( i = 1 )) 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.094 1+0 records in 00:16:27.094 1+0 records out 00:16:27.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391987 s, 10.4 MB/s 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:27.094 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:27.353 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:27.353 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:27.353 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:27.353 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.353 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:27.353 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.353 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.613 04:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79076 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79076 ']' 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79076 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79076 00:16:27.874 killing process with pid 79076 00:16:27.874 Received shutdown signal, test time was about 9.834645 seconds 00:16:27.874 00:16:27.874 Latency(us) 00:16:27.874 [2024-11-27T04:33:24.461Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.874 [2024-11-27T04:33:24.461Z] =================================================================================================================== 00:16:27.874 [2024-11-27T04:33:24.461Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79076' 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79076 00:16:27.874 [2024-11-27 04:33:24.248467] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.874 04:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79076 00:16:28.133 [2024-11-27 04:33:24.712903] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.512 04:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:29.512 00:16:29.512 real 0m13.481s 00:16:29.512 user 0m17.092s 00:16:29.512 sys 0m1.817s 00:16:29.512 ************************************ 00:16:29.512 END TEST raid_rebuild_test_io 00:16:29.512 ************************************ 00:16:29.512 04:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.512 04:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.512 04:33:26 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:29.512 04:33:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:29.512 04:33:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.512 04:33:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.512 
************************************ 00:16:29.512 START TEST raid_rebuild_test_sb_io 00:16:29.512 ************************************ 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.512 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.513 04:33:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79485 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79485 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # 
'[' -z 79485 ']' 00:16:29.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.513 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.772 [2024-11-27 04:33:26.138178] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:29.772 [2024-11-27 04:33:26.138381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:29.772 Zero copy mechanism will not be used. 
00:16:29.772 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79485 ] 00:16:29.772 [2024-11-27 04:33:26.315049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.031 [2024-11-27 04:33:26.438376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.291 [2024-11-27 04:33:26.649938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.291 [2024-11-27 04:33:26.650032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.550 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.550 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:30.550 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.550 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:30.550 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.550 04:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.550 BaseBdev1_malloc 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.550 [2024-11-27 04:33:27.034307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:30.550 [2024-11-27 04:33:27.034367] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.550 [2024-11-27 04:33:27.034387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.550 [2024-11-27 04:33:27.034398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.550 [2024-11-27 04:33:27.036605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.550 [2024-11-27 04:33:27.036651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.550 BaseBdev1 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.550 BaseBdev2_malloc 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.550 [2024-11-27 04:33:27.091232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:30.550 [2024-11-27 04:33:27.091383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.550 [2024-11-27 04:33:27.091414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:16:30.550 [2024-11-27 04:33:27.091428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.550 [2024-11-27 04:33:27.093918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.550 [2024-11-27 04:33:27.093961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:30.550 BaseBdev2 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.550 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 BaseBdev3_malloc 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 [2024-11-27 04:33:27.158963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:30.811 [2024-11-27 04:33:27.159022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.811 [2024-11-27 04:33:27.159044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:30.811 [2024-11-27 04:33:27.159055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.811 [2024-11-27 
04:33:27.161260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.811 [2024-11-27 04:33:27.161300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:30.811 BaseBdev3 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 BaseBdev4_malloc 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 [2024-11-27 04:33:27.212952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:30.811 [2024-11-27 04:33:27.213010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.811 [2024-11-27 04:33:27.213031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:30.811 [2024-11-27 04:33:27.213041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.811 [2024-11-27 04:33:27.215101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.811 [2024-11-27 04:33:27.215136] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:30.811 BaseBdev4 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 spare_malloc 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 spare_delay 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 [2024-11-27 04:33:27.275210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:30.811 [2024-11-27 04:33:27.275264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.811 [2024-11-27 04:33:27.275283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:30.811 [2024-11-27 04:33:27.275294] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.811 [2024-11-27 04:33:27.277346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.811 [2024-11-27 04:33:27.277442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:30.811 spare 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.811 [2024-11-27 04:33:27.283266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.811 [2024-11-27 04:33:27.285144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.811 [2024-11-27 04:33:27.285207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.811 [2024-11-27 04:33:27.285260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:30.811 [2024-11-27 04:33:27.285435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:30.811 [2024-11-27 04:33:27.285449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:30.811 [2024-11-27 04:33:27.285689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:30.811 [2024-11-27 04:33:27.285864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:30.811 [2024-11-27 04:33:27.285873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:30.811 
[2024-11-27 04:33:27.286017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.811 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:30.812 "name": "raid_bdev1", 00:16:30.812 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:30.812 "strip_size_kb": 0, 00:16:30.812 "state": "online", 00:16:30.812 "raid_level": "raid1", 00:16:30.812 "superblock": true, 00:16:30.812 "num_base_bdevs": 4, 00:16:30.812 "num_base_bdevs_discovered": 4, 00:16:30.812 "num_base_bdevs_operational": 4, 00:16:30.812 "base_bdevs_list": [ 00:16:30.812 { 00:16:30.812 "name": "BaseBdev1", 00:16:30.812 "uuid": "4d542de7-fb5e-599a-8b87-2817b28f0fcd", 00:16:30.812 "is_configured": true, 00:16:30.812 "data_offset": 2048, 00:16:30.812 "data_size": 63488 00:16:30.812 }, 00:16:30.812 { 00:16:30.812 "name": "BaseBdev2", 00:16:30.812 "uuid": "a9513f23-c056-5790-97cb-e2c9babbf89a", 00:16:30.812 "is_configured": true, 00:16:30.812 "data_offset": 2048, 00:16:30.812 "data_size": 63488 00:16:30.812 }, 00:16:30.812 { 00:16:30.812 "name": "BaseBdev3", 00:16:30.812 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:30.812 "is_configured": true, 00:16:30.812 "data_offset": 2048, 00:16:30.812 "data_size": 63488 00:16:30.812 }, 00:16:30.812 { 00:16:30.812 "name": "BaseBdev4", 00:16:30.812 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:30.812 "is_configured": true, 00:16:30.812 "data_offset": 2048, 00:16:30.812 "data_size": 63488 00:16:30.812 } 00:16:30.812 ] 00:16:30.812 }' 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.812 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
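The trace above builds four malloc base bdevs, wraps each in a passthru bdev, and assembles them into a raid1 bdev with a superblock. A minimal sketch of that setup sequence, condensed from the RPC calls visible in this log (the `rpc` wrapper here just echoes for illustration; against a live SPDK target it would invoke `scripts/rpc.py -s /var/tmp/spdk.sock` instead):

```shell
# Illustrative dry-run wrapper; replace the echo with a real rpc.py call
# when driving a running SPDK application.
rpc() { echo "rpc.py $*"; }

# Create four 32 MiB / 512-byte-block malloc bdevs and claim each behind
# a passthru bdev, mirroring bdev_raid.sh lines 601-603 in the trace.
for i in 1 2 3 4; do
  rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
  rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# Assemble the raid1 bdev with a superblock (-s), as in bdev_raid.sh line 612.
rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
```

The subsequent `bdev_raid_get_bdevs` call in the log confirms the result: state `online`, 4 of 4 base bdevs discovered and operational, `data_offset` 2048 because the superblock occupies the first blocks of each base bdev.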
00:16:31.381 [2024-11-27 04:33:27.758828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.381 [2024-11-27 04:33:27.850279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:31.381 04:33:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.381 "name": "raid_bdev1", 00:16:31.381 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:31.381 "strip_size_kb": 0, 00:16:31.381 "state": "online", 00:16:31.381 "raid_level": "raid1", 00:16:31.381 "superblock": true, 00:16:31.381 "num_base_bdevs": 4, 00:16:31.381 "num_base_bdevs_discovered": 3, 00:16:31.381 "num_base_bdevs_operational": 3, 
00:16:31.381 "base_bdevs_list": [ 00:16:31.381 { 00:16:31.381 "name": null, 00:16:31.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.381 "is_configured": false, 00:16:31.381 "data_offset": 0, 00:16:31.381 "data_size": 63488 00:16:31.381 }, 00:16:31.381 { 00:16:31.381 "name": "BaseBdev2", 00:16:31.381 "uuid": "a9513f23-c056-5790-97cb-e2c9babbf89a", 00:16:31.381 "is_configured": true, 00:16:31.381 "data_offset": 2048, 00:16:31.381 "data_size": 63488 00:16:31.381 }, 00:16:31.381 { 00:16:31.381 "name": "BaseBdev3", 00:16:31.381 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:31.381 "is_configured": true, 00:16:31.381 "data_offset": 2048, 00:16:31.381 "data_size": 63488 00:16:31.381 }, 00:16:31.381 { 00:16:31.381 "name": "BaseBdev4", 00:16:31.381 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:31.381 "is_configured": true, 00:16:31.381 "data_offset": 2048, 00:16:31.381 "data_size": 63488 00:16:31.381 } 00:16:31.381 ] 00:16:31.381 }' 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.381 04:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.381 [2024-11-27 04:33:27.938635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:31.381 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:31.381 Zero copy mechanism will not be used. 00:16:31.381 Running I/O for 60 seconds... 
00:16:31.707 04:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:31.707 04:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.707 04:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.995 [2024-11-27 04:33:28.275916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:31.995 04:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.995 04:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:31.995 [2024-11-27 04:33:28.364123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:31.995 [2024-11-27 04:33:28.366407] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.995 [2024-11-27 04:33:28.468086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:31.995 [2024-11-27 04:33:28.468790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:32.254 [2024-11-27 04:33:28.686232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:32.254 [2024-11-27 04:33:28.687132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:32.771 132.00 IOPS, 396.00 MiB/s [2024-11-27T04:33:29.358Z] [2024-11-27 04:33:29.166417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:32.771 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.771 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:16:32.771 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.771 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.771 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.771 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.771 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.771 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.771 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.030 "name": "raid_bdev1", 00:16:33.030 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:33.030 "strip_size_kb": 0, 00:16:33.030 "state": "online", 00:16:33.030 "raid_level": "raid1", 00:16:33.030 "superblock": true, 00:16:33.030 "num_base_bdevs": 4, 00:16:33.030 "num_base_bdevs_discovered": 4, 00:16:33.030 "num_base_bdevs_operational": 4, 00:16:33.030 "process": { 00:16:33.030 "type": "rebuild", 00:16:33.030 "target": "spare", 00:16:33.030 "progress": { 00:16:33.030 "blocks": 10240, 00:16:33.030 "percent": 16 00:16:33.030 } 00:16:33.030 }, 00:16:33.030 "base_bdevs_list": [ 00:16:33.030 { 00:16:33.030 "name": "spare", 00:16:33.030 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:33.030 "is_configured": true, 00:16:33.030 "data_offset": 2048, 00:16:33.030 "data_size": 63488 00:16:33.030 }, 00:16:33.030 { 00:16:33.030 "name": "BaseBdev2", 00:16:33.030 "uuid": "a9513f23-c056-5790-97cb-e2c9babbf89a", 00:16:33.030 "is_configured": true, 
00:16:33.030 "data_offset": 2048, 00:16:33.030 "data_size": 63488 00:16:33.030 }, 00:16:33.030 { 00:16:33.030 "name": "BaseBdev3", 00:16:33.030 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:33.030 "is_configured": true, 00:16:33.030 "data_offset": 2048, 00:16:33.030 "data_size": 63488 00:16:33.030 }, 00:16:33.030 { 00:16:33.030 "name": "BaseBdev4", 00:16:33.030 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:33.030 "is_configured": true, 00:16:33.030 "data_offset": 2048, 00:16:33.030 "data_size": 63488 00:16:33.030 } 00:16:33.030 ] 00:16:33.030 }' 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.030 [2024-11-27 04:33:29.472875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.030 [2024-11-27 04:33:29.524398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:33.030 [2024-11-27 04:33:29.533441] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.030 [2024-11-27 04:33:29.543234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.030 [2024-11-27 04:33:29.543357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:33.030 [2024-11-27 04:33:29.543379] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.030 [2024-11-27 04:33:29.572755] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.030 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:16:33.289 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.289 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.289 "name": "raid_bdev1", 00:16:33.289 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:33.289 "strip_size_kb": 0, 00:16:33.289 "state": "online", 00:16:33.289 "raid_level": "raid1", 00:16:33.289 "superblock": true, 00:16:33.289 "num_base_bdevs": 4, 00:16:33.289 "num_base_bdevs_discovered": 3, 00:16:33.289 "num_base_bdevs_operational": 3, 00:16:33.289 "base_bdevs_list": [ 00:16:33.289 { 00:16:33.289 "name": null, 00:16:33.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.289 "is_configured": false, 00:16:33.289 "data_offset": 0, 00:16:33.289 "data_size": 63488 00:16:33.289 }, 00:16:33.289 { 00:16:33.289 "name": "BaseBdev2", 00:16:33.289 "uuid": "a9513f23-c056-5790-97cb-e2c9babbf89a", 00:16:33.289 "is_configured": true, 00:16:33.289 "data_offset": 2048, 00:16:33.289 "data_size": 63488 00:16:33.289 }, 00:16:33.289 { 00:16:33.289 "name": "BaseBdev3", 00:16:33.289 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:33.289 "is_configured": true, 00:16:33.289 "data_offset": 2048, 00:16:33.289 "data_size": 63488 00:16:33.289 }, 00:16:33.289 { 00:16:33.289 "name": "BaseBdev4", 00:16:33.289 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:33.289 "is_configured": true, 00:16:33.289 "data_offset": 2048, 00:16:33.289 "data_size": 63488 00:16:33.289 } 00:16:33.289 ] 00:16:33.289 }' 00:16:33.289 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.289 04:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.548 130.00 IOPS, 390.00 MiB/s [2024-11-27T04:33:30.135Z] 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.548 "name": "raid_bdev1", 00:16:33.548 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:33.548 "strip_size_kb": 0, 00:16:33.548 "state": "online", 00:16:33.548 "raid_level": "raid1", 00:16:33.548 "superblock": true, 00:16:33.548 "num_base_bdevs": 4, 00:16:33.548 "num_base_bdevs_discovered": 3, 00:16:33.548 "num_base_bdevs_operational": 3, 00:16:33.548 "base_bdevs_list": [ 00:16:33.548 { 00:16:33.548 "name": null, 00:16:33.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.548 "is_configured": false, 00:16:33.548 "data_offset": 0, 00:16:33.548 "data_size": 63488 00:16:33.548 }, 00:16:33.548 { 00:16:33.548 "name": "BaseBdev2", 00:16:33.548 "uuid": "a9513f23-c056-5790-97cb-e2c9babbf89a", 00:16:33.548 "is_configured": true, 00:16:33.548 "data_offset": 2048, 00:16:33.548 "data_size": 63488 00:16:33.548 }, 00:16:33.548 { 00:16:33.548 "name": "BaseBdev3", 00:16:33.548 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 
00:16:33.548 "is_configured": true, 00:16:33.548 "data_offset": 2048, 00:16:33.548 "data_size": 63488 00:16:33.548 }, 00:16:33.548 { 00:16:33.548 "name": "BaseBdev4", 00:16:33.548 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:33.548 "is_configured": true, 00:16:33.548 "data_offset": 2048, 00:16:33.548 "data_size": 63488 00:16:33.548 } 00:16:33.548 ] 00:16:33.548 }' 00:16:33.548 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.808 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.808 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.808 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.808 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:33.808 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.808 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.808 [2024-11-27 04:33:30.225106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.808 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.808 04:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:33.808 [2024-11-27 04:33:30.283860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:33.808 [2024-11-27 04:33:30.285969] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:34.068 [2024-11-27 04:33:30.410783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:34.068 [2024-11-27 04:33:30.411409] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:34.068 [2024-11-27 04:33:30.540363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:34.068 [2024-11-27 04:33:30.540714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:34.326 [2024-11-27 04:33:30.875363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:34.585 134.67 IOPS, 404.00 MiB/s [2024-11-27T04:33:31.172Z] [2024-11-27 04:33:31.102882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:34.585 [2024-11-27 04:33:31.103768] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.844 "name": "raid_bdev1", 00:16:34.844 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:34.844 "strip_size_kb": 0, 00:16:34.844 "state": "online", 00:16:34.844 "raid_level": "raid1", 00:16:34.844 "superblock": true, 00:16:34.844 "num_base_bdevs": 4, 00:16:34.844 "num_base_bdevs_discovered": 4, 00:16:34.844 "num_base_bdevs_operational": 4, 00:16:34.844 "process": { 00:16:34.844 "type": "rebuild", 00:16:34.844 "target": "spare", 00:16:34.844 "progress": { 00:16:34.844 "blocks": 10240, 00:16:34.844 "percent": 16 00:16:34.844 } 00:16:34.844 }, 00:16:34.844 "base_bdevs_list": [ 00:16:34.844 { 00:16:34.844 "name": "spare", 00:16:34.844 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:34.844 "is_configured": true, 00:16:34.844 "data_offset": 2048, 00:16:34.844 "data_size": 63488 00:16:34.844 }, 00:16:34.844 { 00:16:34.844 "name": "BaseBdev2", 00:16:34.844 "uuid": "a9513f23-c056-5790-97cb-e2c9babbf89a", 00:16:34.844 "is_configured": true, 00:16:34.844 "data_offset": 2048, 00:16:34.844 "data_size": 63488 00:16:34.844 }, 00:16:34.844 { 00:16:34.844 "name": "BaseBdev3", 00:16:34.844 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:34.844 "is_configured": true, 00:16:34.844 "data_offset": 2048, 00:16:34.844 "data_size": 63488 00:16:34.844 }, 00:16:34.844 { 00:16:34.844 "name": "BaseBdev4", 00:16:34.844 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:34.844 "is_configured": true, 00:16:34.844 "data_offset": 2048, 00:16:34.844 "data_size": 63488 00:16:34.844 } 00:16:34.844 ] 00:16:34.844 }' 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.844 
04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:34.844 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:34.844 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:34.845 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.845 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.845 [2024-11-27 04:33:31.425360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:35.154 [2024-11-27 04:33:31.446738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:35.154 [2024-11-27 04:33:31.554528] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:35.154 [2024-11-27 04:33:31.554567] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:35.154 [2024-11-27 04:33:31.556952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.154 "name": "raid_bdev1", 00:16:35.154 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:35.154 "strip_size_kb": 0, 00:16:35.154 "state": "online", 00:16:35.154 "raid_level": "raid1", 00:16:35.154 "superblock": true, 00:16:35.154 "num_base_bdevs": 4, 00:16:35.154 "num_base_bdevs_discovered": 3, 00:16:35.154 "num_base_bdevs_operational": 3, 00:16:35.154 "process": { 00:16:35.154 "type": "rebuild", 00:16:35.154 "target": "spare", 00:16:35.154 "progress": { 00:16:35.154 "blocks": 14336, 
00:16:35.154 "percent": 22 00:16:35.154 } 00:16:35.154 }, 00:16:35.154 "base_bdevs_list": [ 00:16:35.154 { 00:16:35.154 "name": "spare", 00:16:35.154 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:35.154 "is_configured": true, 00:16:35.154 "data_offset": 2048, 00:16:35.154 "data_size": 63488 00:16:35.154 }, 00:16:35.154 { 00:16:35.154 "name": null, 00:16:35.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.154 "is_configured": false, 00:16:35.154 "data_offset": 0, 00:16:35.154 "data_size": 63488 00:16:35.154 }, 00:16:35.154 { 00:16:35.154 "name": "BaseBdev3", 00:16:35.154 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:35.154 "is_configured": true, 00:16:35.154 "data_offset": 2048, 00:16:35.154 "data_size": 63488 00:16:35.154 }, 00:16:35.154 { 00:16:35.154 "name": "BaseBdev4", 00:16:35.154 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:35.154 "is_configured": true, 00:16:35.154 "data_offset": 2048, 00:16:35.154 "data_size": 63488 00:16:35.154 } 00:16:35.154 ] 00:16:35.154 }' 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.154 [2024-11-27 04:33:31.685744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:35.154 [2024-11-27 04:33:31.686392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=519 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.154 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.428 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.428 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.428 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.428 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.428 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.428 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.428 "name": "raid_bdev1", 00:16:35.428 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:35.428 "strip_size_kb": 0, 00:16:35.428 "state": "online", 00:16:35.428 "raid_level": "raid1", 00:16:35.428 "superblock": true, 00:16:35.428 "num_base_bdevs": 4, 00:16:35.428 "num_base_bdevs_discovered": 3, 00:16:35.428 "num_base_bdevs_operational": 3, 00:16:35.428 "process": { 00:16:35.428 "type": "rebuild", 00:16:35.428 "target": "spare", 00:16:35.428 "progress": { 00:16:35.428 "blocks": 16384, 00:16:35.428 "percent": 25 00:16:35.428 } 00:16:35.428 }, 00:16:35.428 "base_bdevs_list": [ 00:16:35.428 { 00:16:35.428 "name": "spare", 00:16:35.428 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 
00:16:35.428 "is_configured": true, 00:16:35.428 "data_offset": 2048, 00:16:35.428 "data_size": 63488 00:16:35.428 }, 00:16:35.428 { 00:16:35.428 "name": null, 00:16:35.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.428 "is_configured": false, 00:16:35.428 "data_offset": 0, 00:16:35.429 "data_size": 63488 00:16:35.429 }, 00:16:35.429 { 00:16:35.429 "name": "BaseBdev3", 00:16:35.429 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:35.429 "is_configured": true, 00:16:35.429 "data_offset": 2048, 00:16:35.429 "data_size": 63488 00:16:35.429 }, 00:16:35.429 { 00:16:35.429 "name": "BaseBdev4", 00:16:35.429 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:35.429 "is_configured": true, 00:16:35.429 "data_offset": 2048, 00:16:35.429 "data_size": 63488 00:16:35.429 } 00:16:35.429 ] 00:16:35.429 }' 00:16:35.429 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.429 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.429 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.429 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.429 04:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.687 115.00 IOPS, 345.00 MiB/s [2024-11-27T04:33:32.274Z] [2024-11-27 04:33:32.177431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:35.946 [2024-11-27 04:33:32.397855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:35.946 [2024-11-27 04:33:32.522713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.515 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.515 "name": "raid_bdev1", 00:16:36.515 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:36.515 "strip_size_kb": 0, 00:16:36.515 "state": "online", 00:16:36.515 "raid_level": "raid1", 00:16:36.515 "superblock": true, 00:16:36.515 "num_base_bdevs": 4, 00:16:36.515 "num_base_bdevs_discovered": 3, 00:16:36.515 "num_base_bdevs_operational": 3, 00:16:36.515 "process": { 00:16:36.515 "type": "rebuild", 00:16:36.515 "target": "spare", 00:16:36.515 "progress": { 00:16:36.515 "blocks": 32768, 00:16:36.515 "percent": 51 00:16:36.515 } 00:16:36.516 }, 00:16:36.516 "base_bdevs_list": [ 00:16:36.516 { 00:16:36.516 "name": "spare", 00:16:36.516 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 
00:16:36.516 "is_configured": true, 00:16:36.516 "data_offset": 2048, 00:16:36.516 "data_size": 63488 00:16:36.516 }, 00:16:36.516 { 00:16:36.516 "name": null, 00:16:36.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.516 "is_configured": false, 00:16:36.516 "data_offset": 0, 00:16:36.516 "data_size": 63488 00:16:36.516 }, 00:16:36.516 { 00:16:36.516 "name": "BaseBdev3", 00:16:36.516 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:36.516 "is_configured": true, 00:16:36.516 "data_offset": 2048, 00:16:36.516 "data_size": 63488 00:16:36.516 }, 00:16:36.516 { 00:16:36.516 "name": "BaseBdev4", 00:16:36.516 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:36.516 "is_configured": true, 00:16:36.516 "data_offset": 2048, 00:16:36.516 "data_size": 63488 00:16:36.516 } 00:16:36.516 ] 00:16:36.516 }' 00:16:36.516 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.516 105.20 IOPS, 315.60 MiB/s [2024-11-27T04:33:33.103Z] 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.516 04:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.516 [2024-11-27 04:33:32.962818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:36.516 04:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.516 04:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.776 [2024-11-27 04:33:33.287500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:36.776 [2024-11-27 04:33:33.288405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:37.344 [2024-11-27 04:33:33.870609] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:37.604 96.00 IOPS, 288.00 MiB/s [2024-11-27T04:33:34.191Z] 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.604 "name": "raid_bdev1", 00:16:37.604 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:37.604 "strip_size_kb": 0, 00:16:37.604 "state": "online", 00:16:37.604 "raid_level": "raid1", 00:16:37.604 "superblock": true, 00:16:37.604 "num_base_bdevs": 4, 00:16:37.604 "num_base_bdevs_discovered": 3, 00:16:37.604 "num_base_bdevs_operational": 3, 00:16:37.604 "process": { 00:16:37.604 "type": "rebuild", 00:16:37.604 "target": "spare", 00:16:37.604 "progress": { 
00:16:37.604 "blocks": 47104, 00:16:37.604 "percent": 74 00:16:37.604 } 00:16:37.604 }, 00:16:37.604 "base_bdevs_list": [ 00:16:37.604 { 00:16:37.604 "name": "spare", 00:16:37.604 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:37.604 "is_configured": true, 00:16:37.604 "data_offset": 2048, 00:16:37.604 "data_size": 63488 00:16:37.604 }, 00:16:37.604 { 00:16:37.604 "name": null, 00:16:37.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.604 "is_configured": false, 00:16:37.604 "data_offset": 0, 00:16:37.604 "data_size": 63488 00:16:37.604 }, 00:16:37.604 { 00:16:37.604 "name": "BaseBdev3", 00:16:37.604 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:37.604 "is_configured": true, 00:16:37.604 "data_offset": 2048, 00:16:37.604 "data_size": 63488 00:16:37.604 }, 00:16:37.604 { 00:16:37.604 "name": "BaseBdev4", 00:16:37.604 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:37.604 "is_configured": true, 00:16:37.604 "data_offset": 2048, 00:16:37.604 "data_size": 63488 00:16:37.604 } 00:16:37.604 ] 00:16:37.604 }' 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.604 04:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.541 [2024-11-27 04:33:34.874914] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:38.541 87.71 IOPS, 263.14 MiB/s [2024-11-27T04:33:35.128Z] [2024-11-27 04:33:34.972863] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:38.541 [2024-11-27 04:33:34.976082] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.800 "name": "raid_bdev1", 00:16:38.800 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:38.800 "strip_size_kb": 0, 00:16:38.800 "state": "online", 00:16:38.800 "raid_level": "raid1", 00:16:38.800 "superblock": true, 00:16:38.800 "num_base_bdevs": 4, 00:16:38.800 "num_base_bdevs_discovered": 3, 00:16:38.800 "num_base_bdevs_operational": 3, 00:16:38.800 "base_bdevs_list": [ 00:16:38.800 { 00:16:38.800 "name": "spare", 00:16:38.800 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:38.800 "is_configured": true, 00:16:38.800 "data_offset": 2048, 00:16:38.800 
"data_size": 63488 00:16:38.800 }, 00:16:38.800 { 00:16:38.800 "name": null, 00:16:38.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.800 "is_configured": false, 00:16:38.800 "data_offset": 0, 00:16:38.800 "data_size": 63488 00:16:38.800 }, 00:16:38.800 { 00:16:38.800 "name": "BaseBdev3", 00:16:38.800 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:38.800 "is_configured": true, 00:16:38.800 "data_offset": 2048, 00:16:38.800 "data_size": 63488 00:16:38.800 }, 00:16:38.800 { 00:16:38.800 "name": "BaseBdev4", 00:16:38.800 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:38.800 "is_configured": true, 00:16:38.800 "data_offset": 2048, 00:16:38.800 "data_size": 63488 00:16:38.800 } 00:16:38.800 ] 00:16:38.800 }' 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.800 
04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.800 "name": "raid_bdev1", 00:16:38.800 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:38.800 "strip_size_kb": 0, 00:16:38.800 "state": "online", 00:16:38.800 "raid_level": "raid1", 00:16:38.800 "superblock": true, 00:16:38.800 "num_base_bdevs": 4, 00:16:38.800 "num_base_bdevs_discovered": 3, 00:16:38.800 "num_base_bdevs_operational": 3, 00:16:38.800 "base_bdevs_list": [ 00:16:38.800 { 00:16:38.800 "name": "spare", 00:16:38.800 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:38.800 "is_configured": true, 00:16:38.800 "data_offset": 2048, 00:16:38.800 "data_size": 63488 00:16:38.800 }, 00:16:38.800 { 00:16:38.800 "name": null, 00:16:38.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.800 "is_configured": false, 00:16:38.800 "data_offset": 0, 00:16:38.800 "data_size": 63488 00:16:38.800 }, 00:16:38.800 { 00:16:38.800 "name": "BaseBdev3", 00:16:38.800 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:38.800 "is_configured": true, 00:16:38.800 "data_offset": 2048, 00:16:38.800 "data_size": 63488 00:16:38.800 }, 00:16:38.800 { 00:16:38.800 "name": "BaseBdev4", 00:16:38.800 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:38.800 "is_configured": true, 00:16:38.800 "data_offset": 2048, 00:16:38.800 "data_size": 63488 00:16:38.800 } 00:16:38.800 ] 00:16:38.800 }' 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.800 04:33:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.800 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.060 "name": "raid_bdev1", 00:16:39.060 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:39.060 "strip_size_kb": 0, 00:16:39.060 "state": "online", 00:16:39.060 "raid_level": "raid1", 00:16:39.060 "superblock": true, 00:16:39.060 "num_base_bdevs": 4, 00:16:39.060 "num_base_bdevs_discovered": 3, 00:16:39.060 "num_base_bdevs_operational": 3, 00:16:39.060 "base_bdevs_list": [ 00:16:39.060 { 00:16:39.060 "name": "spare", 00:16:39.060 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:39.060 "is_configured": true, 00:16:39.060 "data_offset": 2048, 00:16:39.060 "data_size": 63488 00:16:39.060 }, 00:16:39.060 { 00:16:39.060 "name": null, 00:16:39.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.060 "is_configured": false, 00:16:39.060 "data_offset": 0, 00:16:39.060 "data_size": 63488 00:16:39.060 }, 00:16:39.060 { 00:16:39.060 "name": "BaseBdev3", 00:16:39.060 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:39.060 "is_configured": true, 00:16:39.060 "data_offset": 2048, 00:16:39.060 "data_size": 63488 00:16:39.060 }, 00:16:39.060 { 00:16:39.060 "name": "BaseBdev4", 00:16:39.060 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:39.060 "is_configured": true, 00:16:39.060 "data_offset": 2048, 00:16:39.060 "data_size": 63488 00:16:39.060 } 00:16:39.060 ] 00:16:39.060 }' 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.060 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.321 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:39.321 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.321 04:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.321 [2024-11-27 
04:33:35.879406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.321 [2024-11-27 04:33:35.879456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.580 80.88 IOPS, 242.62 MiB/s 00:16:39.580 Latency(us) 00:16:39.580 [2024-11-27T04:33:36.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.580 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:39.580 raid_bdev1 : 8.05 80.62 241.87 0.00 0.00 16862.99 368.46 119968.08 00:16:39.580 [2024-11-27T04:33:36.167Z] =================================================================================================================== 00:16:39.580 [2024-11-27T04:33:36.167Z] Total : 80.62 241.87 0.00 0.00 16862.99 368.46 119968.08 00:16:39.580 [2024-11-27 04:33:35.999577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.580 [2024-11-27 04:33:35.999656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.580 [2024-11-27 04:33:35.999759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.580 [2024-11-27 04:33:35.999772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:39.580 { 00:16:39.580 "results": [ 00:16:39.580 { 00:16:39.580 "job": "raid_bdev1", 00:16:39.580 "core_mask": "0x1", 00:16:39.580 "workload": "randrw", 00:16:39.580 "percentage": 50, 00:16:39.580 "status": "finished", 00:16:39.580 "queue_depth": 2, 00:16:39.580 "io_size": 3145728, 00:16:39.580 "runtime": 8.049702, 00:16:39.580 "iops": 80.62410260653127, 00:16:39.580 "mibps": 241.8723078195938, 00:16:39.580 "io_failed": 0, 00:16:39.580 "io_timeout": 0, 00:16:39.580 "avg_latency_us": 16862.994790776538, 00:16:39.580 "min_latency_us": 368.461135371179, 00:16:39.580 "max_latency_us": 119968.08384279476 00:16:39.580 
} 00:16:39.580 ], 00:16:39.580 "core_count": 1 00:16:39.580 } 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:39.580 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.580 04:33:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:39.839 /dev/nbd0 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.839 1+0 records in 00:16:39.839 1+0 records out 00:16:39.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531625 s, 7.7 MB/s 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.839 
04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:39.839 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.840 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:40.099 /dev/nbd1 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.099 1+0 records in 00:16:40.099 1+0 records out 00:16:40.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565543 s, 7.2 MB/s 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.099 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:40.357 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:40.357 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.357 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:40.357 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:40.357 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:40.357 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.357 04:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:40.616 04:33:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.616 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:40.875 /dev/nbd1 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.875 1+0 records in 00:16:40.875 1+0 records out 00:16:40.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330581 s, 12.4 MB/s 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:40.875 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:40.876 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.876 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:40.876 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:40.876 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:40.876 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.876 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:41.134 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:41.134 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:41.134 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:41.134 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.134 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.134 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:41.135 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:41.135 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.135 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:41.135 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.135 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:41.135 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.135 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:41.135 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.135 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.393 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.394 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.394 [2024-11-27 04:33:37.944590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.394 [2024-11-27 04:33:37.944656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.394 [2024-11-27 04:33:37.944680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:41.394 [2024-11-27 04:33:37.944691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.394 [2024-11-27 04:33:37.947130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.394 [2024-11-27 04:33:37.947166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.394 [2024-11-27 04:33:37.947258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.394 [2024-11-27 04:33:37.947307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.394 [2024-11-27 04:33:37.947475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.394 [2024-11-27 04:33:37.947597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:41.394 spare 00:16:41.394 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.394 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:41.394 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.394 04:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.653 [2024-11-27 04:33:38.047511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:41.653 [2024-11-27 04:33:38.047648] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:41.653 [2024-11-27 04:33:38.048072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:41.653 [2024-11-27 04:33:38.048348] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:41.653 [2024-11-27 04:33:38.048360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:41.653 [2024-11-27 04:33:38.048602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.653 04:33:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.653 "name": "raid_bdev1", 00:16:41.653 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:41.653 "strip_size_kb": 0, 00:16:41.653 "state": "online", 00:16:41.653 "raid_level": "raid1", 00:16:41.653 "superblock": true, 00:16:41.653 "num_base_bdevs": 4, 00:16:41.653 "num_base_bdevs_discovered": 3, 00:16:41.653 "num_base_bdevs_operational": 3, 00:16:41.653 "base_bdevs_list": [ 00:16:41.653 { 00:16:41.653 "name": "spare", 00:16:41.653 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 2048, 00:16:41.653 "data_size": 63488 00:16:41.653 }, 00:16:41.653 { 00:16:41.653 "name": null, 00:16:41.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.653 "is_configured": false, 00:16:41.653 "data_offset": 2048, 00:16:41.653 "data_size": 63488 00:16:41.653 }, 00:16:41.653 { 00:16:41.653 "name": "BaseBdev3", 00:16:41.653 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 2048, 00:16:41.653 "data_size": 63488 00:16:41.653 }, 00:16:41.653 { 00:16:41.653 "name": "BaseBdev4", 00:16:41.653 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 2048, 00:16:41.653 "data_size": 63488 00:16:41.653 } 00:16:41.653 ] 00:16:41.653 }' 00:16:41.653 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.653 04:33:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.913 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.913 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.913 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.913 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.913 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.913 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.913 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.913 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.913 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.173 "name": "raid_bdev1", 00:16:42.173 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:42.173 "strip_size_kb": 0, 00:16:42.173 "state": "online", 00:16:42.173 "raid_level": "raid1", 00:16:42.173 "superblock": true, 00:16:42.173 "num_base_bdevs": 4, 00:16:42.173 "num_base_bdevs_discovered": 3, 00:16:42.173 "num_base_bdevs_operational": 3, 00:16:42.173 "base_bdevs_list": [ 00:16:42.173 { 00:16:42.173 "name": "spare", 00:16:42.173 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:42.173 "is_configured": true, 00:16:42.173 "data_offset": 2048, 00:16:42.173 "data_size": 63488 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "name": null, 00:16:42.173 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:42.173 "is_configured": false, 00:16:42.173 "data_offset": 2048, 00:16:42.173 "data_size": 63488 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "name": "BaseBdev3", 00:16:42.173 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:42.173 "is_configured": true, 00:16:42.173 "data_offset": 2048, 00:16:42.173 "data_size": 63488 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "name": "BaseBdev4", 00:16:42.173 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:42.173 "is_configured": true, 00:16:42.173 "data_offset": 2048, 00:16:42.173 "data_size": 63488 00:16:42.173 } 00:16:42.173 ] 00:16:42.173 }' 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.173 [2024-11-27 04:33:38.655650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.173 "name": "raid_bdev1", 00:16:42.173 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:42.173 "strip_size_kb": 0, 00:16:42.173 "state": "online", 00:16:42.173 "raid_level": "raid1", 00:16:42.173 "superblock": true, 00:16:42.173 "num_base_bdevs": 4, 00:16:42.173 "num_base_bdevs_discovered": 2, 00:16:42.173 "num_base_bdevs_operational": 2, 00:16:42.173 "base_bdevs_list": [ 00:16:42.173 { 00:16:42.173 "name": null, 00:16:42.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.173 "is_configured": false, 00:16:42.173 "data_offset": 0, 00:16:42.173 "data_size": 63488 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "name": null, 00:16:42.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.173 "is_configured": false, 00:16:42.173 "data_offset": 2048, 00:16:42.173 "data_size": 63488 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "name": "BaseBdev3", 00:16:42.173 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:42.173 "is_configured": true, 00:16:42.173 "data_offset": 2048, 00:16:42.173 "data_size": 63488 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "name": "BaseBdev4", 00:16:42.173 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:42.173 "is_configured": true, 00:16:42.173 "data_offset": 2048, 00:16:42.173 "data_size": 63488 00:16:42.173 } 00:16:42.173 ] 00:16:42.173 }' 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.173 04:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.740 04:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:42.740 04:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.740 04:33:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.740 [2024-11-27 04:33:39.131217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.740 [2024-11-27 04:33:39.131514] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:42.740 [2024-11-27 04:33:39.131591] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:42.740 [2024-11-27 04:33:39.131658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.740 [2024-11-27 04:33:39.149554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:42.740 04:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.740 04:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:42.740 [2024-11-27 04:33:39.151804] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.677 "name": "raid_bdev1", 00:16:43.677 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:43.677 "strip_size_kb": 0, 00:16:43.677 "state": "online", 00:16:43.677 "raid_level": "raid1", 00:16:43.677 "superblock": true, 00:16:43.677 "num_base_bdevs": 4, 00:16:43.677 "num_base_bdevs_discovered": 3, 00:16:43.677 "num_base_bdevs_operational": 3, 00:16:43.677 "process": { 00:16:43.677 "type": "rebuild", 00:16:43.677 "target": "spare", 00:16:43.677 "progress": { 00:16:43.677 "blocks": 20480, 00:16:43.677 "percent": 32 00:16:43.677 } 00:16:43.677 }, 00:16:43.677 "base_bdevs_list": [ 00:16:43.677 { 00:16:43.677 "name": "spare", 00:16:43.677 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:43.677 "is_configured": true, 00:16:43.677 "data_offset": 2048, 00:16:43.677 "data_size": 63488 00:16:43.677 }, 00:16:43.677 { 00:16:43.677 "name": null, 00:16:43.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.677 "is_configured": false, 00:16:43.677 "data_offset": 2048, 00:16:43.677 "data_size": 63488 00:16:43.677 }, 00:16:43.677 { 00:16:43.677 "name": "BaseBdev3", 00:16:43.677 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:43.677 "is_configured": true, 00:16:43.677 "data_offset": 2048, 00:16:43.677 "data_size": 63488 00:16:43.677 }, 00:16:43.677 { 00:16:43.677 "name": "BaseBdev4", 00:16:43.677 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:43.677 "is_configured": true, 00:16:43.677 "data_offset": 2048, 00:16:43.677 "data_size": 63488 00:16:43.677 } 00:16:43.677 ] 00:16:43.677 }' 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.677 04:33:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.677 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.937 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.937 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:43.937 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.937 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.938 [2024-11-27 04:33:40.314826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.938 [2024-11-27 04:33:40.357724] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:43.938 [2024-11-27 04:33:40.357870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.938 [2024-11-27 04:33:40.357908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.938 [2024-11-27 04:33:40.357922] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.938 04:33:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.938 "name": "raid_bdev1", 00:16:43.938 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:43.938 "strip_size_kb": 0, 00:16:43.938 "state": "online", 00:16:43.938 "raid_level": "raid1", 00:16:43.938 "superblock": true, 00:16:43.938 "num_base_bdevs": 4, 00:16:43.938 "num_base_bdevs_discovered": 2, 00:16:43.938 "num_base_bdevs_operational": 2, 00:16:43.938 "base_bdevs_list": [ 00:16:43.938 { 00:16:43.938 "name": null, 00:16:43.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.938 "is_configured": false, 00:16:43.938 "data_offset": 0, 00:16:43.938 "data_size": 63488 00:16:43.938 }, 00:16:43.938 { 00:16:43.938 "name": null, 00:16:43.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.938 "is_configured": false, 00:16:43.938 "data_offset": 2048, 00:16:43.938 
"data_size": 63488 00:16:43.938 }, 00:16:43.938 { 00:16:43.938 "name": "BaseBdev3", 00:16:43.938 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:43.938 "is_configured": true, 00:16:43.938 "data_offset": 2048, 00:16:43.938 "data_size": 63488 00:16:43.938 }, 00:16:43.938 { 00:16:43.938 "name": "BaseBdev4", 00:16:43.938 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:43.938 "is_configured": true, 00:16:43.938 "data_offset": 2048, 00:16:43.938 "data_size": 63488 00:16:43.938 } 00:16:43.938 ] 00:16:43.938 }' 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.938 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.507 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:44.507 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.507 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.507 [2024-11-27 04:33:40.892275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:44.507 [2024-11-27 04:33:40.892397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.507 [2024-11-27 04:33:40.892460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:44.507 [2024-11-27 04:33:40.892494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.507 [2024-11-27 04:33:40.893030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.507 [2024-11-27 04:33:40.893115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:44.507 [2024-11-27 04:33:40.893255] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:44.507 [2024-11-27 04:33:40.893305] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:44.507 [2024-11-27 04:33:40.893352] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:44.507 [2024-11-27 04:33:40.893432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.507 [2024-11-27 04:33:40.909423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:44.507 spare 00:16:44.507 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.507 [2024-11-27 04:33:40.911390] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.507 04:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.446 "name": "raid_bdev1", 00:16:45.446 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:45.446 "strip_size_kb": 0, 00:16:45.446 "state": "online", 00:16:45.446 "raid_level": "raid1", 00:16:45.446 "superblock": true, 00:16:45.446 "num_base_bdevs": 4, 00:16:45.446 "num_base_bdevs_discovered": 3, 00:16:45.446 "num_base_bdevs_operational": 3, 00:16:45.446 "process": { 00:16:45.446 "type": "rebuild", 00:16:45.446 "target": "spare", 00:16:45.446 "progress": { 00:16:45.446 "blocks": 20480, 00:16:45.446 "percent": 32 00:16:45.446 } 00:16:45.446 }, 00:16:45.446 "base_bdevs_list": [ 00:16:45.446 { 00:16:45.446 "name": "spare", 00:16:45.446 "uuid": "6bffb84f-7cdf-5044-9bd2-4f59564a8a95", 00:16:45.446 "is_configured": true, 00:16:45.446 "data_offset": 2048, 00:16:45.446 "data_size": 63488 00:16:45.446 }, 00:16:45.446 { 00:16:45.446 "name": null, 00:16:45.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.446 "is_configured": false, 00:16:45.446 "data_offset": 2048, 00:16:45.446 "data_size": 63488 00:16:45.446 }, 00:16:45.446 { 00:16:45.446 "name": "BaseBdev3", 00:16:45.446 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:45.446 "is_configured": true, 00:16:45.446 "data_offset": 2048, 00:16:45.446 "data_size": 63488 00:16:45.446 }, 00:16:45.446 { 00:16:45.446 "name": "BaseBdev4", 00:16:45.446 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:45.446 "is_configured": true, 00:16:45.446 "data_offset": 2048, 00:16:45.446 "data_size": 63488 00:16:45.446 } 00:16:45.446 ] 00:16:45.446 }' 00:16:45.446 04:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.446 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.446 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.706 [2024-11-27 04:33:42.059049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.706 [2024-11-27 04:33:42.117319] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:45.706 [2024-11-27 04:33:42.117403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.706 [2024-11-27 04:33:42.117442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.706 [2024-11-27 04:33:42.117449] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.706 "name": "raid_bdev1", 00:16:45.706 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:45.706 "strip_size_kb": 0, 00:16:45.706 "state": "online", 00:16:45.706 "raid_level": "raid1", 00:16:45.706 "superblock": true, 00:16:45.706 "num_base_bdevs": 4, 00:16:45.706 "num_base_bdevs_discovered": 2, 00:16:45.706 "num_base_bdevs_operational": 2, 00:16:45.706 "base_bdevs_list": [ 00:16:45.706 { 00:16:45.706 "name": null, 00:16:45.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.706 "is_configured": false, 00:16:45.706 "data_offset": 0, 00:16:45.706 "data_size": 63488 00:16:45.706 }, 00:16:45.706 { 00:16:45.706 "name": null, 00:16:45.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.706 "is_configured": false, 00:16:45.706 "data_offset": 2048, 00:16:45.706 "data_size": 63488 00:16:45.706 }, 00:16:45.706 { 00:16:45.706 "name": "BaseBdev3", 00:16:45.706 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:45.706 "is_configured": true, 
00:16:45.706 "data_offset": 2048, 00:16:45.706 "data_size": 63488 00:16:45.706 }, 00:16:45.706 { 00:16:45.706 "name": "BaseBdev4", 00:16:45.706 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:45.706 "is_configured": true, 00:16:45.706 "data_offset": 2048, 00:16:45.706 "data_size": 63488 00:16:45.706 } 00:16:45.706 ] 00:16:45.706 }' 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.706 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.275 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.275 "name": "raid_bdev1", 00:16:46.275 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:46.275 "strip_size_kb": 0, 00:16:46.275 "state": "online", 00:16:46.275 "raid_level": "raid1", 00:16:46.275 
"superblock": true, 00:16:46.275 "num_base_bdevs": 4, 00:16:46.275 "num_base_bdevs_discovered": 2, 00:16:46.275 "num_base_bdevs_operational": 2, 00:16:46.275 "base_bdevs_list": [ 00:16:46.275 { 00:16:46.275 "name": null, 00:16:46.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.275 "is_configured": false, 00:16:46.275 "data_offset": 0, 00:16:46.275 "data_size": 63488 00:16:46.275 }, 00:16:46.275 { 00:16:46.275 "name": null, 00:16:46.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.275 "is_configured": false, 00:16:46.275 "data_offset": 2048, 00:16:46.275 "data_size": 63488 00:16:46.275 }, 00:16:46.275 { 00:16:46.275 "name": "BaseBdev3", 00:16:46.275 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:46.275 "is_configured": true, 00:16:46.275 "data_offset": 2048, 00:16:46.275 "data_size": 63488 00:16:46.275 }, 00:16:46.275 { 00:16:46.276 "name": "BaseBdev4", 00:16:46.276 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:46.276 "is_configured": true, 00:16:46.276 "data_offset": 2048, 00:16:46.276 "data_size": 63488 00:16:46.276 } 00:16:46.276 ] 00:16:46.276 }' 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.276 [2024-11-27 04:33:42.750525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:46.276 [2024-11-27 04:33:42.750584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.276 [2024-11-27 04:33:42.750607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:46.276 [2024-11-27 04:33:42.750633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.276 [2024-11-27 04:33:42.751142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.276 [2024-11-27 04:33:42.751173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.276 [2024-11-27 04:33:42.751272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:46.276 [2024-11-27 04:33:42.751289] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:46.276 [2024-11-27 04:33:42.751300] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:46.276 [2024-11-27 04:33:42.751314] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:46.276 BaseBdev1 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.276 04:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.219 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.490 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.490 "name": "raid_bdev1", 00:16:47.490 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:47.490 "strip_size_kb": 0, 00:16:47.490 "state": "online", 00:16:47.490 "raid_level": "raid1", 00:16:47.490 "superblock": true, 00:16:47.490 
"num_base_bdevs": 4, 00:16:47.490 "num_base_bdevs_discovered": 2, 00:16:47.490 "num_base_bdevs_operational": 2, 00:16:47.490 "base_bdevs_list": [ 00:16:47.490 { 00:16:47.490 "name": null, 00:16:47.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.490 "is_configured": false, 00:16:47.490 "data_offset": 0, 00:16:47.490 "data_size": 63488 00:16:47.490 }, 00:16:47.490 { 00:16:47.490 "name": null, 00:16:47.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.490 "is_configured": false, 00:16:47.490 "data_offset": 2048, 00:16:47.490 "data_size": 63488 00:16:47.490 }, 00:16:47.490 { 00:16:47.490 "name": "BaseBdev3", 00:16:47.490 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:47.490 "is_configured": true, 00:16:47.490 "data_offset": 2048, 00:16:47.490 "data_size": 63488 00:16:47.490 }, 00:16:47.490 { 00:16:47.490 "name": "BaseBdev4", 00:16:47.490 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:47.490 "is_configured": true, 00:16:47.490 "data_offset": 2048, 00:16:47.490 "data_size": 63488 00:16:47.490 } 00:16:47.490 ] 00:16:47.490 }' 00:16:47.490 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.490 04:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.750 04:33:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.750 "name": "raid_bdev1", 00:16:47.750 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:47.750 "strip_size_kb": 0, 00:16:47.750 "state": "online", 00:16:47.750 "raid_level": "raid1", 00:16:47.750 "superblock": true, 00:16:47.750 "num_base_bdevs": 4, 00:16:47.750 "num_base_bdevs_discovered": 2, 00:16:47.750 "num_base_bdevs_operational": 2, 00:16:47.750 "base_bdevs_list": [ 00:16:47.750 { 00:16:47.750 "name": null, 00:16:47.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.750 "is_configured": false, 00:16:47.750 "data_offset": 0, 00:16:47.750 "data_size": 63488 00:16:47.750 }, 00:16:47.750 { 00:16:47.750 "name": null, 00:16:47.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.750 "is_configured": false, 00:16:47.750 "data_offset": 2048, 00:16:47.750 "data_size": 63488 00:16:47.750 }, 00:16:47.750 { 00:16:47.750 "name": "BaseBdev3", 00:16:47.750 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:47.750 "is_configured": true, 00:16:47.750 "data_offset": 2048, 00:16:47.750 "data_size": 63488 00:16:47.750 }, 00:16:47.750 { 00:16:47.750 "name": "BaseBdev4", 00:16:47.750 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:47.750 "is_configured": true, 00:16:47.750 "data_offset": 2048, 00:16:47.750 "data_size": 63488 00:16:47.750 } 00:16:47.750 ] 00:16:47.750 }' 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.750 04:33:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.750 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.009 [2024-11-27 04:33:44.372023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.009 [2024-11-27 04:33:44.372215] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:48.009 [2024-11-27 04:33:44.372232] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:16:48.009 request: 00:16:48.009 { 00:16:48.009 "base_bdev": "BaseBdev1", 00:16:48.009 "raid_bdev": "raid_bdev1", 00:16:48.009 "method": "bdev_raid_add_base_bdev", 00:16:48.009 "req_id": 1 00:16:48.009 } 00:16:48.009 Got JSON-RPC error response 00:16:48.009 response: 00:16:48.009 { 00:16:48.009 "code": -22, 00:16:48.009 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:48.009 } 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.009 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.010 04:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.949 04:33:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.949 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.949 "name": "raid_bdev1", 00:16:48.949 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:48.949 "strip_size_kb": 0, 00:16:48.949 "state": "online", 00:16:48.949 "raid_level": "raid1", 00:16:48.949 "superblock": true, 00:16:48.950 "num_base_bdevs": 4, 00:16:48.950 "num_base_bdevs_discovered": 2, 00:16:48.950 "num_base_bdevs_operational": 2, 00:16:48.950 "base_bdevs_list": [ 00:16:48.950 { 00:16:48.950 "name": null, 00:16:48.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.950 "is_configured": false, 00:16:48.950 "data_offset": 0, 00:16:48.950 "data_size": 63488 00:16:48.950 }, 00:16:48.950 { 00:16:48.950 "name": null, 00:16:48.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.950 "is_configured": false, 00:16:48.950 "data_offset": 2048, 00:16:48.950 "data_size": 63488 00:16:48.950 }, 00:16:48.950 { 00:16:48.950 "name": "BaseBdev3", 00:16:48.950 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:48.950 "is_configured": true, 00:16:48.950 "data_offset": 2048, 00:16:48.950 "data_size": 63488 00:16:48.950 }, 00:16:48.950 { 00:16:48.950 "name": "BaseBdev4", 00:16:48.950 "uuid": 
"ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:48.950 "is_configured": true, 00:16:48.950 "data_offset": 2048, 00:16:48.950 "data_size": 63488 00:16:48.950 } 00:16:48.950 ] 00:16:48.950 }' 00:16:48.950 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.950 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.517 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.517 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.517 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.517 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.517 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.518 "name": "raid_bdev1", 00:16:49.518 "uuid": "1b6a4ce9-c416-436b-9a98-536bedbd4a35", 00:16:49.518 "strip_size_kb": 0, 00:16:49.518 "state": "online", 00:16:49.518 "raid_level": "raid1", 00:16:49.518 "superblock": true, 00:16:49.518 "num_base_bdevs": 4, 00:16:49.518 "num_base_bdevs_discovered": 2, 00:16:49.518 "num_base_bdevs_operational": 2, 00:16:49.518 
"base_bdevs_list": [ 00:16:49.518 { 00:16:49.518 "name": null, 00:16:49.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.518 "is_configured": false, 00:16:49.518 "data_offset": 0, 00:16:49.518 "data_size": 63488 00:16:49.518 }, 00:16:49.518 { 00:16:49.518 "name": null, 00:16:49.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.518 "is_configured": false, 00:16:49.518 "data_offset": 2048, 00:16:49.518 "data_size": 63488 00:16:49.518 }, 00:16:49.518 { 00:16:49.518 "name": "BaseBdev3", 00:16:49.518 "uuid": "00edac8f-5921-5f67-83c4-ae49e96b601b", 00:16:49.518 "is_configured": true, 00:16:49.518 "data_offset": 2048, 00:16:49.518 "data_size": 63488 00:16:49.518 }, 00:16:49.518 { 00:16:49.518 "name": "BaseBdev4", 00:16:49.518 "uuid": "ea650bb7-ba20-5b8e-81b0-97f90d999b92", 00:16:49.518 "is_configured": true, 00:16:49.518 "data_offset": 2048, 00:16:49.518 "data_size": 63488 00:16:49.518 } 00:16:49.518 ] 00:16:49.518 }' 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79485 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79485 ']' 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79485 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.518 04:33:45 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79485 00:16:49.518 04:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.518 04:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.518 killing process with pid 79485 00:16:49.518 Received shutdown signal, test time was about 18.117936 seconds 00:16:49.518 00:16:49.518 Latency(us) 00:16:49.518 [2024-11-27T04:33:46.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.518 [2024-11-27T04:33:46.105Z] =================================================================================================================== 00:16:49.518 [2024-11-27T04:33:46.105Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.518 04:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79485' 00:16:49.518 04:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79485 00:16:49.518 [2024-11-27 04:33:46.024029] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.518 [2024-11-27 04:33:46.024191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.518 04:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79485 00:16:49.518 [2024-11-27 04:33:46.024261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.518 [2024-11-27 04:33:46.024275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:50.087 [2024-11-27 04:33:46.460063] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.469 04:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:51.469 00:16:51.469 real 0m21.650s 00:16:51.469 user 0m28.361s 00:16:51.469 sys 0m2.655s 00:16:51.469 
************************************ 00:16:51.469 END TEST raid_rebuild_test_sb_io 00:16:51.469 ************************************ 00:16:51.469 04:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.469 04:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.469 04:33:47 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:51.469 04:33:47 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:51.469 04:33:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:51.469 04:33:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.469 04:33:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.469 ************************************ 00:16:51.469 START TEST raid5f_state_function_test 00:16:51.469 ************************************ 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:51.469 04:33:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:16:51.469 Process raid pid: 80207 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80207 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80207' 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80207 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80207 ']' 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.469 04:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.469 [2024-11-27 04:33:47.855646] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:16:51.469 [2024-11-27 04:33:47.855772] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.469 [2024-11-27 04:33:48.034360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.728 [2024-11-27 04:33:48.149419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.987 [2024-11-27 04:33:48.367201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.987 [2024-11-27 04:33:48.367249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.246 [2024-11-27 04:33:48.716288] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.246 [2024-11-27 04:33:48.716357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.246 [2024-11-27 04:33:48.716369] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.246 [2024-11-27 04:33:48.716380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.246 [2024-11-27 04:33:48.716387] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:52.246 [2024-11-27 04:33:48.716396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:52.246 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.246 "name": "Existed_Raid", 00:16:52.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.246 "strip_size_kb": 64, 00:16:52.246 "state": "configuring", 00:16:52.246 "raid_level": "raid5f", 00:16:52.246 "superblock": false, 00:16:52.246 "num_base_bdevs": 3, 00:16:52.246 "num_base_bdevs_discovered": 0, 00:16:52.246 "num_base_bdevs_operational": 3, 00:16:52.246 "base_bdevs_list": [ 00:16:52.246 { 00:16:52.246 "name": "BaseBdev1", 00:16:52.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.247 "is_configured": false, 00:16:52.247 "data_offset": 0, 00:16:52.247 "data_size": 0 00:16:52.247 }, 00:16:52.247 { 00:16:52.247 "name": "BaseBdev2", 00:16:52.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.247 "is_configured": false, 00:16:52.247 "data_offset": 0, 00:16:52.247 "data_size": 0 00:16:52.247 }, 00:16:52.247 { 00:16:52.247 "name": "BaseBdev3", 00:16:52.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.247 "is_configured": false, 00:16:52.247 "data_offset": 0, 00:16:52.247 "data_size": 0 00:16:52.247 } 00:16:52.247 ] 00:16:52.247 }' 00:16:52.247 04:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.247 04:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.813 [2024-11-27 04:33:49.203349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.813 [2024-11-27 04:33:49.203459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.813 [2024-11-27 04:33:49.211339] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.813 [2024-11-27 04:33:49.211447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.813 [2024-11-27 04:33:49.211500] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.813 [2024-11-27 04:33:49.211528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.813 [2024-11-27 04:33:49.211550] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.813 [2024-11-27 04:33:49.211576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.813 [2024-11-27 04:33:49.261815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.813 BaseBdev1 00:16:52.813 04:33:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.813 [ 00:16:52.813 { 00:16:52.813 "name": "BaseBdev1", 00:16:52.813 "aliases": [ 00:16:52.813 "258dc6a6-406a-477e-b9b1-be4dca07b26d" 00:16:52.813 ], 00:16:52.813 "product_name": "Malloc disk", 00:16:52.813 "block_size": 512, 00:16:52.813 "num_blocks": 65536, 00:16:52.813 "uuid": "258dc6a6-406a-477e-b9b1-be4dca07b26d", 00:16:52.813 "assigned_rate_limits": { 00:16:52.813 "rw_ios_per_sec": 0, 00:16:52.813 
"rw_mbytes_per_sec": 0, 00:16:52.813 "r_mbytes_per_sec": 0, 00:16:52.813 "w_mbytes_per_sec": 0 00:16:52.813 }, 00:16:52.813 "claimed": true, 00:16:52.813 "claim_type": "exclusive_write", 00:16:52.813 "zoned": false, 00:16:52.813 "supported_io_types": { 00:16:52.813 "read": true, 00:16:52.813 "write": true, 00:16:52.813 "unmap": true, 00:16:52.813 "flush": true, 00:16:52.813 "reset": true, 00:16:52.813 "nvme_admin": false, 00:16:52.813 "nvme_io": false, 00:16:52.813 "nvme_io_md": false, 00:16:52.813 "write_zeroes": true, 00:16:52.813 "zcopy": true, 00:16:52.813 "get_zone_info": false, 00:16:52.813 "zone_management": false, 00:16:52.813 "zone_append": false, 00:16:52.813 "compare": false, 00:16:52.813 "compare_and_write": false, 00:16:52.813 "abort": true, 00:16:52.813 "seek_hole": false, 00:16:52.813 "seek_data": false, 00:16:52.813 "copy": true, 00:16:52.813 "nvme_iov_md": false 00:16:52.813 }, 00:16:52.813 "memory_domains": [ 00:16:52.813 { 00:16:52.813 "dma_device_id": "system", 00:16:52.813 "dma_device_type": 1 00:16:52.813 }, 00:16:52.813 { 00:16:52.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.813 "dma_device_type": 2 00:16:52.813 } 00:16:52.813 ], 00:16:52.813 "driver_specific": {} 00:16:52.813 } 00:16:52.813 ] 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.813 04:33:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.813 "name": "Existed_Raid", 00:16:52.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.813 "strip_size_kb": 64, 00:16:52.813 "state": "configuring", 00:16:52.813 "raid_level": "raid5f", 00:16:52.813 "superblock": false, 00:16:52.813 "num_base_bdevs": 3, 00:16:52.813 "num_base_bdevs_discovered": 1, 00:16:52.813 "num_base_bdevs_operational": 3, 00:16:52.813 "base_bdevs_list": [ 00:16:52.813 { 00:16:52.813 "name": "BaseBdev1", 00:16:52.813 "uuid": "258dc6a6-406a-477e-b9b1-be4dca07b26d", 00:16:52.813 "is_configured": true, 00:16:52.813 "data_offset": 0, 00:16:52.813 "data_size": 65536 00:16:52.813 }, 00:16:52.813 { 00:16:52.813 "name": 
"BaseBdev2", 00:16:52.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.813 "is_configured": false, 00:16:52.813 "data_offset": 0, 00:16:52.813 "data_size": 0 00:16:52.813 }, 00:16:52.813 { 00:16:52.813 "name": "BaseBdev3", 00:16:52.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.813 "is_configured": false, 00:16:52.813 "data_offset": 0, 00:16:52.813 "data_size": 0 00:16:52.813 } 00:16:52.813 ] 00:16:52.813 }' 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.813 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.382 [2024-11-27 04:33:49.757027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.382 [2024-11-27 04:33:49.757103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.382 [2024-11-27 04:33:49.769037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.382 [2024-11-27 04:33:49.770837] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:53.382 [2024-11-27 04:33:49.770884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.382 [2024-11-27 04:33:49.770894] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.382 [2024-11-27 04:33:49.770902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.382 "name": "Existed_Raid", 00:16:53.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.382 "strip_size_kb": 64, 00:16:53.382 "state": "configuring", 00:16:53.382 "raid_level": "raid5f", 00:16:53.382 "superblock": false, 00:16:53.382 "num_base_bdevs": 3, 00:16:53.382 "num_base_bdevs_discovered": 1, 00:16:53.382 "num_base_bdevs_operational": 3, 00:16:53.382 "base_bdevs_list": [ 00:16:53.382 { 00:16:53.382 "name": "BaseBdev1", 00:16:53.382 "uuid": "258dc6a6-406a-477e-b9b1-be4dca07b26d", 00:16:53.382 "is_configured": true, 00:16:53.382 "data_offset": 0, 00:16:53.382 "data_size": 65536 00:16:53.382 }, 00:16:53.382 { 00:16:53.382 "name": "BaseBdev2", 00:16:53.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.382 "is_configured": false, 00:16:53.382 "data_offset": 0, 00:16:53.382 "data_size": 0 00:16:53.382 }, 00:16:53.382 { 00:16:53.382 "name": "BaseBdev3", 00:16:53.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.382 "is_configured": false, 00:16:53.382 "data_offset": 0, 00:16:53.382 "data_size": 0 00:16:53.382 } 00:16:53.382 ] 00:16:53.382 }' 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.382 04:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.951 [2024-11-27 04:33:50.309735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.951 BaseBdev2 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.951 [ 00:16:53.951 { 00:16:53.951 "name": "BaseBdev2", 00:16:53.951 "aliases": [ 00:16:53.951 "75e03644-4ad5-4c5e-8640-b2d3e65feff2" 00:16:53.951 ], 00:16:53.951 "product_name": "Malloc disk", 00:16:53.951 "block_size": 512, 00:16:53.951 "num_blocks": 65536, 00:16:53.951 "uuid": "75e03644-4ad5-4c5e-8640-b2d3e65feff2", 00:16:53.951 "assigned_rate_limits": { 00:16:53.951 "rw_ios_per_sec": 0, 00:16:53.951 "rw_mbytes_per_sec": 0, 00:16:53.951 "r_mbytes_per_sec": 0, 00:16:53.951 "w_mbytes_per_sec": 0 00:16:53.951 }, 00:16:53.951 "claimed": true, 00:16:53.951 "claim_type": "exclusive_write", 00:16:53.951 "zoned": false, 00:16:53.951 "supported_io_types": { 00:16:53.951 "read": true, 00:16:53.951 "write": true, 00:16:53.951 "unmap": true, 00:16:53.951 "flush": true, 00:16:53.951 "reset": true, 00:16:53.951 "nvme_admin": false, 00:16:53.951 "nvme_io": false, 00:16:53.951 "nvme_io_md": false, 00:16:53.951 "write_zeroes": true, 00:16:53.951 "zcopy": true, 00:16:53.951 "get_zone_info": false, 00:16:53.951 "zone_management": false, 00:16:53.951 "zone_append": false, 00:16:53.951 "compare": false, 00:16:53.951 "compare_and_write": false, 00:16:53.951 "abort": true, 00:16:53.951 "seek_hole": false, 00:16:53.951 "seek_data": false, 00:16:53.951 "copy": true, 00:16:53.951 "nvme_iov_md": false 00:16:53.951 }, 00:16:53.951 "memory_domains": [ 00:16:53.951 { 00:16:53.951 "dma_device_id": "system", 00:16:53.951 "dma_device_type": 1 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.951 "dma_device_type": 2 00:16:53.951 } 00:16:53.951 ], 00:16:53.951 "driver_specific": {} 00:16:53.951 } 00:16:53.951 ] 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.951 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:53.951 "name": "Existed_Raid", 00:16:53.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.951 "strip_size_kb": 64, 00:16:53.951 "state": "configuring", 00:16:53.951 "raid_level": "raid5f", 00:16:53.951 "superblock": false, 00:16:53.951 "num_base_bdevs": 3, 00:16:53.951 "num_base_bdevs_discovered": 2, 00:16:53.951 "num_base_bdevs_operational": 3, 00:16:53.951 "base_bdevs_list": [ 00:16:53.951 { 00:16:53.951 "name": "BaseBdev1", 00:16:53.951 "uuid": "258dc6a6-406a-477e-b9b1-be4dca07b26d", 00:16:53.951 "is_configured": true, 00:16:53.951 "data_offset": 0, 00:16:53.951 "data_size": 65536 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "name": "BaseBdev2", 00:16:53.951 "uuid": "75e03644-4ad5-4c5e-8640-b2d3e65feff2", 00:16:53.952 "is_configured": true, 00:16:53.952 "data_offset": 0, 00:16:53.952 "data_size": 65536 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "name": "BaseBdev3", 00:16:53.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.952 "is_configured": false, 00:16:53.952 "data_offset": 0, 00:16:53.952 "data_size": 0 00:16:53.952 } 00:16:53.952 ] 00:16:53.952 }' 00:16:53.952 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.952 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.521 [2024-11-27 04:33:50.858840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.521 [2024-11-27 04:33:50.858904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:54.521 [2024-11-27 04:33:50.858918] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:54.521 [2024-11-27 04:33:50.859193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:54.521 [2024-11-27 04:33:50.864598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:54.521 [2024-11-27 04:33:50.864620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:54.521 [2024-11-27 04:33:50.864883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.521 BaseBdev3 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.521 [ 00:16:54.521 { 00:16:54.521 "name": "BaseBdev3", 00:16:54.521 "aliases": [ 00:16:54.521 "d08c89b9-8f20-41d6-81ad-fe03b5a3e76b" 00:16:54.521 ], 00:16:54.521 "product_name": "Malloc disk", 00:16:54.521 "block_size": 512, 00:16:54.521 "num_blocks": 65536, 00:16:54.521 "uuid": "d08c89b9-8f20-41d6-81ad-fe03b5a3e76b", 00:16:54.521 "assigned_rate_limits": { 00:16:54.521 "rw_ios_per_sec": 0, 00:16:54.521 "rw_mbytes_per_sec": 0, 00:16:54.521 "r_mbytes_per_sec": 0, 00:16:54.521 "w_mbytes_per_sec": 0 00:16:54.521 }, 00:16:54.521 "claimed": true, 00:16:54.521 "claim_type": "exclusive_write", 00:16:54.521 "zoned": false, 00:16:54.521 "supported_io_types": { 00:16:54.521 "read": true, 00:16:54.521 "write": true, 00:16:54.521 "unmap": true, 00:16:54.521 "flush": true, 00:16:54.521 "reset": true, 00:16:54.521 "nvme_admin": false, 00:16:54.521 "nvme_io": false, 00:16:54.521 "nvme_io_md": false, 00:16:54.521 "write_zeroes": true, 00:16:54.521 "zcopy": true, 00:16:54.521 "get_zone_info": false, 00:16:54.521 "zone_management": false, 00:16:54.521 "zone_append": false, 00:16:54.521 "compare": false, 00:16:54.521 "compare_and_write": false, 00:16:54.521 "abort": true, 00:16:54.521 "seek_hole": false, 00:16:54.521 "seek_data": false, 00:16:54.521 "copy": true, 00:16:54.521 "nvme_iov_md": false 00:16:54.521 }, 00:16:54.521 "memory_domains": [ 00:16:54.521 { 00:16:54.521 "dma_device_id": "system", 00:16:54.521 "dma_device_type": 1 00:16:54.521 }, 00:16:54.521 { 00:16:54.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.521 "dma_device_type": 2 00:16:54.521 } 00:16:54.521 ], 00:16:54.521 "driver_specific": {} 00:16:54.521 } 00:16:54.521 ] 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.521 04:33:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.521 "name": "Existed_Raid", 00:16:54.521 "uuid": "10d9960a-0e87-41da-8007-c8bd5e6c1bd4", 00:16:54.521 "strip_size_kb": 64, 00:16:54.521 "state": "online", 00:16:54.521 "raid_level": "raid5f", 00:16:54.521 "superblock": false, 00:16:54.521 "num_base_bdevs": 3, 00:16:54.521 "num_base_bdevs_discovered": 3, 00:16:54.521 "num_base_bdevs_operational": 3, 00:16:54.521 "base_bdevs_list": [ 00:16:54.521 { 00:16:54.521 "name": "BaseBdev1", 00:16:54.521 "uuid": "258dc6a6-406a-477e-b9b1-be4dca07b26d", 00:16:54.521 "is_configured": true, 00:16:54.521 "data_offset": 0, 00:16:54.521 "data_size": 65536 00:16:54.521 }, 00:16:54.521 { 00:16:54.521 "name": "BaseBdev2", 00:16:54.521 "uuid": "75e03644-4ad5-4c5e-8640-b2d3e65feff2", 00:16:54.521 "is_configured": true, 00:16:54.521 "data_offset": 0, 00:16:54.521 "data_size": 65536 00:16:54.521 }, 00:16:54.521 { 00:16:54.521 "name": "BaseBdev3", 00:16:54.521 "uuid": "d08c89b9-8f20-41d6-81ad-fe03b5a3e76b", 00:16:54.521 "is_configured": true, 00:16:54.521 "data_offset": 0, 00:16:54.521 "data_size": 65536 00:16:54.521 } 00:16:54.521 ] 00:16:54.521 }' 00:16:54.521 04:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.522 04:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.782 04:33:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.782 [2024-11-27 04:33:51.326812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.782 "name": "Existed_Raid", 00:16:54.782 "aliases": [ 00:16:54.782 "10d9960a-0e87-41da-8007-c8bd5e6c1bd4" 00:16:54.782 ], 00:16:54.782 "product_name": "Raid Volume", 00:16:54.782 "block_size": 512, 00:16:54.782 "num_blocks": 131072, 00:16:54.782 "uuid": "10d9960a-0e87-41da-8007-c8bd5e6c1bd4", 00:16:54.782 "assigned_rate_limits": { 00:16:54.782 "rw_ios_per_sec": 0, 00:16:54.782 "rw_mbytes_per_sec": 0, 00:16:54.782 "r_mbytes_per_sec": 0, 00:16:54.782 "w_mbytes_per_sec": 0 00:16:54.782 }, 00:16:54.782 "claimed": false, 00:16:54.782 "zoned": false, 00:16:54.782 "supported_io_types": { 00:16:54.782 "read": true, 00:16:54.782 "write": true, 00:16:54.782 "unmap": false, 00:16:54.782 "flush": false, 00:16:54.782 "reset": true, 00:16:54.782 "nvme_admin": false, 00:16:54.782 "nvme_io": false, 00:16:54.782 "nvme_io_md": false, 00:16:54.782 "write_zeroes": true, 00:16:54.782 "zcopy": false, 00:16:54.782 "get_zone_info": false, 00:16:54.782 "zone_management": false, 00:16:54.782 "zone_append": false, 
00:16:54.782 "compare": false, 00:16:54.782 "compare_and_write": false, 00:16:54.782 "abort": false, 00:16:54.782 "seek_hole": false, 00:16:54.782 "seek_data": false, 00:16:54.782 "copy": false, 00:16:54.782 "nvme_iov_md": false 00:16:54.782 }, 00:16:54.782 "driver_specific": { 00:16:54.782 "raid": { 00:16:54.782 "uuid": "10d9960a-0e87-41da-8007-c8bd5e6c1bd4", 00:16:54.782 "strip_size_kb": 64, 00:16:54.782 "state": "online", 00:16:54.782 "raid_level": "raid5f", 00:16:54.782 "superblock": false, 00:16:54.782 "num_base_bdevs": 3, 00:16:54.782 "num_base_bdevs_discovered": 3, 00:16:54.782 "num_base_bdevs_operational": 3, 00:16:54.782 "base_bdevs_list": [ 00:16:54.782 { 00:16:54.782 "name": "BaseBdev1", 00:16:54.782 "uuid": "258dc6a6-406a-477e-b9b1-be4dca07b26d", 00:16:54.782 "is_configured": true, 00:16:54.782 "data_offset": 0, 00:16:54.782 "data_size": 65536 00:16:54.782 }, 00:16:54.782 { 00:16:54.782 "name": "BaseBdev2", 00:16:54.782 "uuid": "75e03644-4ad5-4c5e-8640-b2d3e65feff2", 00:16:54.782 "is_configured": true, 00:16:54.782 "data_offset": 0, 00:16:54.782 "data_size": 65536 00:16:54.782 }, 00:16:54.782 { 00:16:54.782 "name": "BaseBdev3", 00:16:54.782 "uuid": "d08c89b9-8f20-41d6-81ad-fe03b5a3e76b", 00:16:54.782 "is_configured": true, 00:16:54.782 "data_offset": 0, 00:16:54.782 "data_size": 65536 00:16:54.782 } 00:16:54.782 ] 00:16:54.782 } 00:16:54.782 } 00:16:54.782 }' 00:16:54.782 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:55.068 BaseBdev2 00:16:55.068 BaseBdev3' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.068 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.068 [2024-11-27 04:33:51.582209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:55.340 
04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.340 "name": "Existed_Raid", 00:16:55.340 "uuid": "10d9960a-0e87-41da-8007-c8bd5e6c1bd4", 00:16:55.340 "strip_size_kb": 64, 00:16:55.340 "state": 
"online", 00:16:55.340 "raid_level": "raid5f", 00:16:55.340 "superblock": false, 00:16:55.340 "num_base_bdevs": 3, 00:16:55.340 "num_base_bdevs_discovered": 2, 00:16:55.340 "num_base_bdevs_operational": 2, 00:16:55.340 "base_bdevs_list": [ 00:16:55.340 { 00:16:55.340 "name": null, 00:16:55.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.340 "is_configured": false, 00:16:55.340 "data_offset": 0, 00:16:55.340 "data_size": 65536 00:16:55.340 }, 00:16:55.340 { 00:16:55.340 "name": "BaseBdev2", 00:16:55.340 "uuid": "75e03644-4ad5-4c5e-8640-b2d3e65feff2", 00:16:55.340 "is_configured": true, 00:16:55.340 "data_offset": 0, 00:16:55.340 "data_size": 65536 00:16:55.340 }, 00:16:55.340 { 00:16:55.340 "name": "BaseBdev3", 00:16:55.340 "uuid": "d08c89b9-8f20-41d6-81ad-fe03b5a3e76b", 00:16:55.340 "is_configured": true, 00:16:55.340 "data_offset": 0, 00:16:55.340 "data_size": 65536 00:16:55.340 } 00:16:55.340 ] 00:16:55.340 }' 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.340 04:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.599 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.599 [2024-11-27 04:33:52.147231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:55.599 [2024-11-27 04:33:52.147380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.859 [2024-11-27 04:33:52.244775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.859 [2024-11-27 04:33:52.304737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:55.859 [2024-11-27 04:33:52.304795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.859 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.120 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.121 BaseBdev2 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:56.121 [ 00:16:56.121 { 00:16:56.121 "name": "BaseBdev2", 00:16:56.121 "aliases": [ 00:16:56.121 "844c838c-1490-4a64-a1c9-876ffd16661d" 00:16:56.121 ], 00:16:56.121 "product_name": "Malloc disk", 00:16:56.121 "block_size": 512, 00:16:56.121 "num_blocks": 65536, 00:16:56.121 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:16:56.121 "assigned_rate_limits": { 00:16:56.121 "rw_ios_per_sec": 0, 00:16:56.121 "rw_mbytes_per_sec": 0, 00:16:56.121 "r_mbytes_per_sec": 0, 00:16:56.121 "w_mbytes_per_sec": 0 00:16:56.121 }, 00:16:56.121 "claimed": false, 00:16:56.121 "zoned": false, 00:16:56.121 "supported_io_types": { 00:16:56.121 "read": true, 00:16:56.121 "write": true, 00:16:56.121 "unmap": true, 00:16:56.121 "flush": true, 00:16:56.121 "reset": true, 00:16:56.121 "nvme_admin": false, 00:16:56.121 "nvme_io": false, 00:16:56.121 "nvme_io_md": false, 00:16:56.121 "write_zeroes": true, 00:16:56.121 "zcopy": true, 00:16:56.121 "get_zone_info": false, 00:16:56.121 "zone_management": false, 00:16:56.121 "zone_append": false, 00:16:56.121 "compare": false, 00:16:56.121 "compare_and_write": false, 00:16:56.121 "abort": true, 00:16:56.121 "seek_hole": false, 00:16:56.121 "seek_data": false, 00:16:56.121 "copy": true, 00:16:56.121 "nvme_iov_md": false 00:16:56.121 }, 00:16:56.121 "memory_domains": [ 00:16:56.121 { 00:16:56.121 "dma_device_id": "system", 00:16:56.121 "dma_device_type": 1 00:16:56.121 }, 00:16:56.121 { 00:16:56.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.121 "dma_device_type": 2 00:16:56.121 } 00:16:56.121 ], 00:16:56.121 "driver_specific": {} 00:16:56.121 } 00:16:56.121 ] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.121 BaseBdev3 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.121 [ 00:16:56.121 { 00:16:56.121 "name": "BaseBdev3", 00:16:56.121 "aliases": [ 00:16:56.121 "65ba71d9-618e-42d6-9241-4205bb0a9412" 00:16:56.121 ], 00:16:56.121 "product_name": "Malloc disk", 00:16:56.121 "block_size": 512, 00:16:56.121 "num_blocks": 65536, 00:16:56.121 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:16:56.121 "assigned_rate_limits": { 00:16:56.121 "rw_ios_per_sec": 0, 00:16:56.121 "rw_mbytes_per_sec": 0, 00:16:56.121 "r_mbytes_per_sec": 0, 00:16:56.121 "w_mbytes_per_sec": 0 00:16:56.121 }, 00:16:56.121 "claimed": false, 00:16:56.121 "zoned": false, 00:16:56.121 "supported_io_types": { 00:16:56.121 "read": true, 00:16:56.121 "write": true, 00:16:56.121 "unmap": true, 00:16:56.121 "flush": true, 00:16:56.121 "reset": true, 00:16:56.121 "nvme_admin": false, 00:16:56.121 "nvme_io": false, 00:16:56.121 "nvme_io_md": false, 00:16:56.121 "write_zeroes": true, 00:16:56.121 "zcopy": true, 00:16:56.121 "get_zone_info": false, 00:16:56.121 "zone_management": false, 00:16:56.121 "zone_append": false, 00:16:56.121 "compare": false, 00:16:56.121 "compare_and_write": false, 00:16:56.121 "abort": true, 00:16:56.121 "seek_hole": false, 00:16:56.121 "seek_data": false, 00:16:56.121 "copy": true, 00:16:56.121 "nvme_iov_md": false 00:16:56.121 }, 00:16:56.121 "memory_domains": [ 00:16:56.121 { 00:16:56.121 "dma_device_id": "system", 00:16:56.121 "dma_device_type": 1 00:16:56.121 }, 00:16:56.121 { 00:16:56.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.121 "dma_device_type": 2 00:16:56.121 } 00:16:56.121 ], 00:16:56.121 "driver_specific": {} 00:16:56.121 } 00:16:56.121 ] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:56.121 04:33:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.121 [2024-11-27 04:33:52.630046] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.121 [2024-11-27 04:33:52.630195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.121 [2024-11-27 04:33:52.630250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.121 [2024-11-27 04:33:52.632329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.121 04:33:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.122 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.122 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.122 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.122 "name": "Existed_Raid", 00:16:56.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.122 "strip_size_kb": 64, 00:16:56.122 "state": "configuring", 00:16:56.122 "raid_level": "raid5f", 00:16:56.122 "superblock": false, 00:16:56.122 "num_base_bdevs": 3, 00:16:56.122 "num_base_bdevs_discovered": 2, 00:16:56.122 "num_base_bdevs_operational": 3, 00:16:56.122 "base_bdevs_list": [ 00:16:56.122 { 00:16:56.122 "name": "BaseBdev1", 00:16:56.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.122 "is_configured": false, 00:16:56.122 "data_offset": 0, 00:16:56.122 "data_size": 0 00:16:56.122 }, 00:16:56.122 { 00:16:56.122 "name": "BaseBdev2", 00:16:56.122 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:16:56.122 "is_configured": true, 00:16:56.122 "data_offset": 0, 00:16:56.122 "data_size": 65536 00:16:56.122 }, 00:16:56.122 { 00:16:56.122 "name": "BaseBdev3", 00:16:56.122 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:16:56.122 "is_configured": true, 
00:16:56.122 "data_offset": 0, 00:16:56.122 "data_size": 65536 00:16:56.122 } 00:16:56.122 ] 00:16:56.122 }' 00:16:56.122 04:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.122 04:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.689 [2024-11-27 04:33:53.137209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.689 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.690 04:33:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.690 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.690 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.690 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.690 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.690 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.690 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.690 "name": "Existed_Raid", 00:16:56.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.690 "strip_size_kb": 64, 00:16:56.690 "state": "configuring", 00:16:56.690 "raid_level": "raid5f", 00:16:56.690 "superblock": false, 00:16:56.690 "num_base_bdevs": 3, 00:16:56.690 "num_base_bdevs_discovered": 1, 00:16:56.690 "num_base_bdevs_operational": 3, 00:16:56.690 "base_bdevs_list": [ 00:16:56.690 { 00:16:56.690 "name": "BaseBdev1", 00:16:56.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.690 "is_configured": false, 00:16:56.690 "data_offset": 0, 00:16:56.690 "data_size": 0 00:16:56.690 }, 00:16:56.690 { 00:16:56.690 "name": null, 00:16:56.690 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:16:56.690 "is_configured": false, 00:16:56.690 "data_offset": 0, 00:16:56.690 "data_size": 65536 00:16:56.690 }, 00:16:56.690 { 00:16:56.690 "name": "BaseBdev3", 00:16:56.690 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:16:56.690 "is_configured": true, 00:16:56.690 "data_offset": 0, 00:16:56.690 "data_size": 65536 00:16:56.690 } 00:16:56.690 ] 00:16:56.690 }' 00:16:56.690 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.690 04:33:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.259 [2024-11-27 04:33:53.674763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.259 BaseBdev1 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:57.259 04:33:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.259 [ 00:16:57.259 { 00:16:57.259 "name": "BaseBdev1", 00:16:57.259 "aliases": [ 00:16:57.259 "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a" 00:16:57.259 ], 00:16:57.259 "product_name": "Malloc disk", 00:16:57.259 "block_size": 512, 00:16:57.259 "num_blocks": 65536, 00:16:57.259 "uuid": "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a", 00:16:57.259 "assigned_rate_limits": { 00:16:57.259 "rw_ios_per_sec": 0, 00:16:57.259 "rw_mbytes_per_sec": 0, 00:16:57.259 "r_mbytes_per_sec": 0, 00:16:57.259 "w_mbytes_per_sec": 0 00:16:57.259 }, 00:16:57.259 "claimed": true, 00:16:57.259 "claim_type": "exclusive_write", 00:16:57.259 "zoned": false, 00:16:57.259 "supported_io_types": { 00:16:57.259 "read": true, 00:16:57.259 "write": true, 00:16:57.259 "unmap": true, 00:16:57.259 "flush": true, 00:16:57.259 "reset": true, 00:16:57.259 "nvme_admin": false, 00:16:57.259 "nvme_io": false, 00:16:57.259 "nvme_io_md": false, 00:16:57.259 "write_zeroes": true, 00:16:57.259 "zcopy": true, 00:16:57.259 "get_zone_info": false, 00:16:57.259 "zone_management": false, 00:16:57.259 "zone_append": false, 00:16:57.259 
"compare": false, 00:16:57.259 "compare_and_write": false, 00:16:57.259 "abort": true, 00:16:57.259 "seek_hole": false, 00:16:57.259 "seek_data": false, 00:16:57.259 "copy": true, 00:16:57.259 "nvme_iov_md": false 00:16:57.259 }, 00:16:57.259 "memory_domains": [ 00:16:57.259 { 00:16:57.259 "dma_device_id": "system", 00:16:57.259 "dma_device_type": 1 00:16:57.259 }, 00:16:57.259 { 00:16:57.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.259 "dma_device_type": 2 00:16:57.259 } 00:16:57.259 ], 00:16:57.259 "driver_specific": {} 00:16:57.259 } 00:16:57.259 ] 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.259 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.260 04:33:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.260 "name": "Existed_Raid", 00:16:57.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.260 "strip_size_kb": 64, 00:16:57.260 "state": "configuring", 00:16:57.260 "raid_level": "raid5f", 00:16:57.260 "superblock": false, 00:16:57.260 "num_base_bdevs": 3, 00:16:57.260 "num_base_bdevs_discovered": 2, 00:16:57.260 "num_base_bdevs_operational": 3, 00:16:57.260 "base_bdevs_list": [ 00:16:57.260 { 00:16:57.260 "name": "BaseBdev1", 00:16:57.260 "uuid": "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a", 00:16:57.260 "is_configured": true, 00:16:57.260 "data_offset": 0, 00:16:57.260 "data_size": 65536 00:16:57.260 }, 00:16:57.260 { 00:16:57.260 "name": null, 00:16:57.260 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:16:57.260 "is_configured": false, 00:16:57.260 "data_offset": 0, 00:16:57.260 "data_size": 65536 00:16:57.260 }, 00:16:57.260 { 00:16:57.260 "name": "BaseBdev3", 00:16:57.260 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:16:57.260 "is_configured": true, 00:16:57.260 "data_offset": 0, 00:16:57.260 "data_size": 65536 00:16:57.260 } 00:16:57.260 ] 00:16:57.260 }' 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.260 04:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 04:33:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 [2024-11-27 04:33:54.229908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.830 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.830 04:33:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.831 "name": "Existed_Raid", 00:16:57.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.831 "strip_size_kb": 64, 00:16:57.831 "state": "configuring", 00:16:57.831 "raid_level": "raid5f", 00:16:57.831 "superblock": false, 00:16:57.831 "num_base_bdevs": 3, 00:16:57.831 "num_base_bdevs_discovered": 1, 00:16:57.831 "num_base_bdevs_operational": 3, 00:16:57.831 "base_bdevs_list": [ 00:16:57.831 { 00:16:57.831 "name": "BaseBdev1", 00:16:57.831 "uuid": "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a", 00:16:57.831 "is_configured": true, 00:16:57.831 "data_offset": 0, 00:16:57.831 "data_size": 65536 00:16:57.831 }, 00:16:57.831 { 00:16:57.831 "name": null, 00:16:57.831 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:16:57.831 "is_configured": false, 00:16:57.831 "data_offset": 0, 00:16:57.831 "data_size": 65536 00:16:57.831 }, 00:16:57.831 { 00:16:57.831 "name": null, 
00:16:57.831 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:16:57.831 "is_configured": false, 00:16:57.831 "data_offset": 0, 00:16:57.831 "data_size": 65536 00:16:57.831 } 00:16:57.831 ] 00:16:57.831 }' 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.831 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.091 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.091 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.091 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:58.091 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.091 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.350 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.351 [2024-11-27 04:33:54.681198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.351 04:33:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.351 "name": "Existed_Raid", 00:16:58.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.351 "strip_size_kb": 64, 00:16:58.351 "state": "configuring", 00:16:58.351 "raid_level": "raid5f", 00:16:58.351 "superblock": false, 00:16:58.351 "num_base_bdevs": 3, 00:16:58.351 "num_base_bdevs_discovered": 2, 00:16:58.351 "num_base_bdevs_operational": 3, 00:16:58.351 "base_bdevs_list": [ 00:16:58.351 { 
00:16:58.351 "name": "BaseBdev1", 00:16:58.351 "uuid": "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a", 00:16:58.351 "is_configured": true, 00:16:58.351 "data_offset": 0, 00:16:58.351 "data_size": 65536 00:16:58.351 }, 00:16:58.351 { 00:16:58.351 "name": null, 00:16:58.351 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:16:58.351 "is_configured": false, 00:16:58.351 "data_offset": 0, 00:16:58.351 "data_size": 65536 00:16:58.351 }, 00:16:58.351 { 00:16:58.351 "name": "BaseBdev3", 00:16:58.351 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:16:58.351 "is_configured": true, 00:16:58.351 "data_offset": 0, 00:16:58.351 "data_size": 65536 00:16:58.351 } 00:16:58.351 ] 00:16:58.351 }' 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.351 04:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.610 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.610 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:58.610 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.610 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:58.610 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:58.610 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.610 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.610 [2024-11-27 04:33:55.176359] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.870 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.870 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:58.870 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.870 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.870 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.871 "name": "Existed_Raid", 00:16:58.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.871 "strip_size_kb": 64, 00:16:58.871 "state": "configuring", 00:16:58.871 "raid_level": "raid5f", 00:16:58.871 "superblock": false, 00:16:58.871 "num_base_bdevs": 3, 00:16:58.871 "num_base_bdevs_discovered": 1, 00:16:58.871 "num_base_bdevs_operational": 3, 00:16:58.871 "base_bdevs_list": [ 00:16:58.871 { 00:16:58.871 "name": null, 00:16:58.871 "uuid": "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a", 00:16:58.871 "is_configured": false, 00:16:58.871 "data_offset": 0, 00:16:58.871 "data_size": 65536 00:16:58.871 }, 00:16:58.871 { 00:16:58.871 "name": null, 00:16:58.871 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:16:58.871 "is_configured": false, 00:16:58.871 "data_offset": 0, 00:16:58.871 "data_size": 65536 00:16:58.871 }, 00:16:58.871 { 00:16:58.871 "name": "BaseBdev3", 00:16:58.871 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:16:58.871 "is_configured": true, 00:16:58.871 "data_offset": 0, 00:16:58.871 "data_size": 65536 00:16:58.871 } 00:16:58.871 ] 00:16:58.871 }' 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.871 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.131 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.131 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.131 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.131 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.392 [2024-11-27 04:33:55.759811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.392 04:33:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.392 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.392 "name": "Existed_Raid", 00:16:59.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.392 "strip_size_kb": 64, 00:16:59.392 "state": "configuring", 00:16:59.392 "raid_level": "raid5f", 00:16:59.393 "superblock": false, 00:16:59.393 "num_base_bdevs": 3, 00:16:59.393 "num_base_bdevs_discovered": 2, 00:16:59.393 "num_base_bdevs_operational": 3, 00:16:59.393 "base_bdevs_list": [ 00:16:59.393 { 00:16:59.393 "name": null, 00:16:59.393 "uuid": "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a", 00:16:59.393 "is_configured": false, 00:16:59.393 "data_offset": 0, 00:16:59.393 "data_size": 65536 00:16:59.393 }, 00:16:59.393 { 00:16:59.393 "name": "BaseBdev2", 00:16:59.393 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:16:59.393 "is_configured": true, 00:16:59.393 "data_offset": 0, 00:16:59.393 "data_size": 65536 00:16:59.393 }, 00:16:59.393 { 00:16:59.393 "name": "BaseBdev3", 00:16:59.393 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:16:59.393 "is_configured": true, 00:16:59.393 "data_offset": 0, 00:16:59.393 "data_size": 65536 00:16:59.393 } 00:16:59.393 ] 00:16:59.393 }' 00:16:59.393 04:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.393 04:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.652 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.652 04:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:59.652 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.652 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.652 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.652 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:59.652 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.652 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.652 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bf744152-6f8a-4bab-8eb7-ebdefb6cf71a 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.912 [2024-11-27 04:33:56.309400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:59.912 [2024-11-27 04:33:56.309451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:59.912 [2024-11-27 04:33:56.309461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:59.912 [2024-11-27 04:33:56.309700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:59.912 [2024-11-27 04:33:56.315580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:59.912 [2024-11-27 04:33:56.315601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:59.912 [2024-11-27 04:33:56.315887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.912 NewBaseBdev 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:59.912 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.912 04:33:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.912 [ 00:16:59.912 { 00:16:59.912 "name": "NewBaseBdev", 00:16:59.912 "aliases": [ 00:16:59.912 "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a" 00:16:59.912 ], 00:16:59.912 "product_name": "Malloc disk", 00:16:59.912 "block_size": 512, 00:16:59.912 "num_blocks": 65536, 00:16:59.912 "uuid": "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a", 00:16:59.912 "assigned_rate_limits": { 00:16:59.912 "rw_ios_per_sec": 0, 00:16:59.912 "rw_mbytes_per_sec": 0, 00:16:59.912 "r_mbytes_per_sec": 0, 00:16:59.912 "w_mbytes_per_sec": 0 00:16:59.912 }, 00:16:59.912 "claimed": true, 00:16:59.912 "claim_type": "exclusive_write", 00:16:59.912 "zoned": false, 00:16:59.912 "supported_io_types": { 00:16:59.912 "read": true, 00:16:59.912 "write": true, 00:16:59.912 "unmap": true, 00:16:59.912 "flush": true, 00:16:59.912 "reset": true, 00:16:59.912 "nvme_admin": false, 00:16:59.912 "nvme_io": false, 00:16:59.912 "nvme_io_md": false, 00:16:59.912 "write_zeroes": true, 00:16:59.912 "zcopy": true, 00:16:59.912 "get_zone_info": false, 00:16:59.912 "zone_management": false, 00:16:59.912 "zone_append": false, 00:16:59.912 "compare": false, 00:16:59.912 "compare_and_write": false, 00:16:59.912 "abort": true, 00:16:59.912 "seek_hole": false, 00:16:59.912 "seek_data": false, 00:16:59.912 "copy": true, 00:16:59.912 "nvme_iov_md": false 00:16:59.912 }, 00:16:59.912 "memory_domains": [ 00:16:59.912 { 00:16:59.912 "dma_device_id": "system", 00:16:59.912 "dma_device_type": 1 00:16:59.912 }, 00:16:59.912 { 00:16:59.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.913 "dma_device_type": 2 00:16:59.913 } 00:16:59.913 ], 00:16:59.913 "driver_specific": {} 00:16:59.913 } 00:16:59.913 ] 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:59.913 04:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.913 "name": "Existed_Raid", 00:16:59.913 "uuid": "3b0ae60b-0273-4917-9b9b-0eacaa5b6000", 00:16:59.913 "strip_size_kb": 64, 00:16:59.913 "state": "online", 
00:16:59.913 "raid_level": "raid5f", 00:16:59.913 "superblock": false, 00:16:59.913 "num_base_bdevs": 3, 00:16:59.913 "num_base_bdevs_discovered": 3, 00:16:59.913 "num_base_bdevs_operational": 3, 00:16:59.913 "base_bdevs_list": [ 00:16:59.913 { 00:16:59.913 "name": "NewBaseBdev", 00:16:59.913 "uuid": "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a", 00:16:59.913 "is_configured": true, 00:16:59.913 "data_offset": 0, 00:16:59.913 "data_size": 65536 00:16:59.913 }, 00:16:59.913 { 00:16:59.913 "name": "BaseBdev2", 00:16:59.913 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:16:59.913 "is_configured": true, 00:16:59.913 "data_offset": 0, 00:16:59.913 "data_size": 65536 00:16:59.913 }, 00:16:59.913 { 00:16:59.913 "name": "BaseBdev3", 00:16:59.913 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:16:59.913 "is_configured": true, 00:16:59.913 "data_offset": 0, 00:16:59.913 "data_size": 65536 00:16:59.913 } 00:16:59.913 ] 00:16:59.913 }' 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.913 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:00.483 04:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.483 [2024-11-27 04:33:56.826137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:00.483 "name": "Existed_Raid", 00:17:00.483 "aliases": [ 00:17:00.483 "3b0ae60b-0273-4917-9b9b-0eacaa5b6000" 00:17:00.483 ], 00:17:00.483 "product_name": "Raid Volume", 00:17:00.483 "block_size": 512, 00:17:00.483 "num_blocks": 131072, 00:17:00.483 "uuid": "3b0ae60b-0273-4917-9b9b-0eacaa5b6000", 00:17:00.483 "assigned_rate_limits": { 00:17:00.483 "rw_ios_per_sec": 0, 00:17:00.483 "rw_mbytes_per_sec": 0, 00:17:00.483 "r_mbytes_per_sec": 0, 00:17:00.483 "w_mbytes_per_sec": 0 00:17:00.483 }, 00:17:00.483 "claimed": false, 00:17:00.483 "zoned": false, 00:17:00.483 "supported_io_types": { 00:17:00.483 "read": true, 00:17:00.483 "write": true, 00:17:00.483 "unmap": false, 00:17:00.483 "flush": false, 00:17:00.483 "reset": true, 00:17:00.483 "nvme_admin": false, 00:17:00.483 "nvme_io": false, 00:17:00.483 "nvme_io_md": false, 00:17:00.483 "write_zeroes": true, 00:17:00.483 "zcopy": false, 00:17:00.483 "get_zone_info": false, 00:17:00.483 "zone_management": false, 00:17:00.483 "zone_append": false, 00:17:00.483 "compare": false, 00:17:00.483 "compare_and_write": false, 00:17:00.483 "abort": false, 00:17:00.483 "seek_hole": false, 00:17:00.483 "seek_data": false, 00:17:00.483 "copy": false, 00:17:00.483 "nvme_iov_md": false 00:17:00.483 }, 00:17:00.483 "driver_specific": { 00:17:00.483 "raid": { 00:17:00.483 "uuid": 
"3b0ae60b-0273-4917-9b9b-0eacaa5b6000", 00:17:00.483 "strip_size_kb": 64, 00:17:00.483 "state": "online", 00:17:00.483 "raid_level": "raid5f", 00:17:00.483 "superblock": false, 00:17:00.483 "num_base_bdevs": 3, 00:17:00.483 "num_base_bdevs_discovered": 3, 00:17:00.483 "num_base_bdevs_operational": 3, 00:17:00.483 "base_bdevs_list": [ 00:17:00.483 { 00:17:00.483 "name": "NewBaseBdev", 00:17:00.483 "uuid": "bf744152-6f8a-4bab-8eb7-ebdefb6cf71a", 00:17:00.483 "is_configured": true, 00:17:00.483 "data_offset": 0, 00:17:00.483 "data_size": 65536 00:17:00.483 }, 00:17:00.483 { 00:17:00.483 "name": "BaseBdev2", 00:17:00.483 "uuid": "844c838c-1490-4a64-a1c9-876ffd16661d", 00:17:00.483 "is_configured": true, 00:17:00.483 "data_offset": 0, 00:17:00.483 "data_size": 65536 00:17:00.483 }, 00:17:00.483 { 00:17:00.483 "name": "BaseBdev3", 00:17:00.483 "uuid": "65ba71d9-618e-42d6-9241-4205bb0a9412", 00:17:00.483 "is_configured": true, 00:17:00.483 "data_offset": 0, 00:17:00.483 "data_size": 65536 00:17:00.483 } 00:17:00.483 ] 00:17:00.483 } 00:17:00.483 } 00:17:00.483 }' 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:00.483 BaseBdev2 00:17:00.483 BaseBdev3' 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.483 04:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.483 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.483 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.483 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.483 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.483 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:00.483 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.483 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.484 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.743 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.743 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.743 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.743 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:00.743 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.744 04:33:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.744 [2024-11-27 04:33:57.133374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.744 [2024-11-27 04:33:57.133406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.744 [2024-11-27 04:33:57.133497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.744 [2024-11-27 04:33:57.133818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.744 [2024-11-27 04:33:57.133841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80207 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80207 ']' 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80207 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80207 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.744 killing process with pid 80207 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80207' 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80207 00:17:00.744 [2024-11-27 04:33:57.182908] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.744 04:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80207 00:17:01.003 [2024-11-27 04:33:57.522957] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.379 ************************************ 00:17:02.379 END TEST raid5f_state_function_test 00:17:02.379 ************************************ 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:02.379 00:17:02.379 real 0m10.969s 00:17:02.379 user 0m17.382s 00:17:02.379 sys 0m1.964s 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.379 04:33:58 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:02.379 04:33:58 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:02.379 04:33:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.379 04:33:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.379 ************************************ 00:17:02.379 START TEST raid5f_state_function_test_sb 00:17:02.379 ************************************ 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:02.379 04:33:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:02.379 Process raid pid: 80834 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80834 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80834' 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80834 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80834 ']' 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.379 04:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.380 [2024-11-27 04:33:58.882173] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:17:02.380 [2024-11-27 04:33:58.882398] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.638 [2024-11-27 04:33:59.074296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.638 [2024-11-27 04:33:59.199660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.898 [2024-11-27 04:33:59.421813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.898 [2024-11-27 04:33:59.421857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.465 [2024-11-27 04:33:59.747059] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.465 [2024-11-27 04:33:59.747146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.465 [2024-11-27 04:33:59.747160] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.465 [2024-11-27 04:33:59.747171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.465 [2024-11-27 04:33:59.747184] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:17:03.465 [2024-11-27 04:33:59.747194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.465 04:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.466 04:33:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.466 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.466 "name": "Existed_Raid", 00:17:03.466 "uuid": "d836c001-4d7b-4401-ae0b-59a6b2cc3c5c", 00:17:03.466 "strip_size_kb": 64, 00:17:03.466 "state": "configuring", 00:17:03.466 "raid_level": "raid5f", 00:17:03.466 "superblock": true, 00:17:03.466 "num_base_bdevs": 3, 00:17:03.466 "num_base_bdevs_discovered": 0, 00:17:03.466 "num_base_bdevs_operational": 3, 00:17:03.466 "base_bdevs_list": [ 00:17:03.466 { 00:17:03.466 "name": "BaseBdev1", 00:17:03.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.466 "is_configured": false, 00:17:03.466 "data_offset": 0, 00:17:03.466 "data_size": 0 00:17:03.466 }, 00:17:03.466 { 00:17:03.466 "name": "BaseBdev2", 00:17:03.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.466 "is_configured": false, 00:17:03.466 "data_offset": 0, 00:17:03.466 "data_size": 0 00:17:03.466 }, 00:17:03.466 { 00:17:03.466 "name": "BaseBdev3", 00:17:03.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.466 "is_configured": false, 00:17:03.466 "data_offset": 0, 00:17:03.466 "data_size": 0 00:17:03.466 } 00:17:03.466 ] 00:17:03.466 }' 00:17:03.466 04:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.466 04:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.724 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:03.724 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.724 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.724 [2024-11-27 04:34:00.238161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.724 
[2024-11-27 04:34:00.238253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:03.724 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.724 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:03.724 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.724 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.724 [2024-11-27 04:34:00.250178] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.724 [2024-11-27 04:34:00.250288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.724 [2024-11-27 04:34:00.250327] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.724 [2024-11-27 04:34:00.250366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.724 [2024-11-27 04:34:00.250396] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:03.724 [2024-11-27 04:34:00.250424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:03.724 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.724 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.725 [2024-11-27 04:34:00.296325] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.725 BaseBdev1 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.725 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.983 [ 00:17:03.983 { 00:17:03.983 "name": "BaseBdev1", 00:17:03.983 "aliases": [ 00:17:03.983 "497c4ef0-7a44-4e47-82fe-a3f9c1ce79f4" 00:17:03.983 ], 00:17:03.983 "product_name": "Malloc disk", 00:17:03.983 "block_size": 512, 00:17:03.983 
"num_blocks": 65536, 00:17:03.983 "uuid": "497c4ef0-7a44-4e47-82fe-a3f9c1ce79f4", 00:17:03.983 "assigned_rate_limits": { 00:17:03.983 "rw_ios_per_sec": 0, 00:17:03.983 "rw_mbytes_per_sec": 0, 00:17:03.983 "r_mbytes_per_sec": 0, 00:17:03.983 "w_mbytes_per_sec": 0 00:17:03.983 }, 00:17:03.983 "claimed": true, 00:17:03.983 "claim_type": "exclusive_write", 00:17:03.983 "zoned": false, 00:17:03.983 "supported_io_types": { 00:17:03.983 "read": true, 00:17:03.983 "write": true, 00:17:03.983 "unmap": true, 00:17:03.983 "flush": true, 00:17:03.983 "reset": true, 00:17:03.983 "nvme_admin": false, 00:17:03.983 "nvme_io": false, 00:17:03.983 "nvme_io_md": false, 00:17:03.983 "write_zeroes": true, 00:17:03.983 "zcopy": true, 00:17:03.983 "get_zone_info": false, 00:17:03.983 "zone_management": false, 00:17:03.983 "zone_append": false, 00:17:03.983 "compare": false, 00:17:03.983 "compare_and_write": false, 00:17:03.983 "abort": true, 00:17:03.983 "seek_hole": false, 00:17:03.983 "seek_data": false, 00:17:03.983 "copy": true, 00:17:03.983 "nvme_iov_md": false 00:17:03.983 }, 00:17:03.983 "memory_domains": [ 00:17:03.983 { 00:17:03.983 "dma_device_id": "system", 00:17:03.983 "dma_device_type": 1 00:17:03.983 }, 00:17:03.983 { 00:17:03.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.983 "dma_device_type": 2 00:17:03.983 } 00:17:03.983 ], 00:17:03.983 "driver_specific": {} 00:17:03.983 } 00:17:03.983 ] 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.983 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.983 "name": "Existed_Raid", 00:17:03.983 "uuid": "18e5f114-5336-48c3-ad52-09b539364324", 00:17:03.983 "strip_size_kb": 64, 00:17:03.983 "state": "configuring", 00:17:03.983 "raid_level": "raid5f", 00:17:03.983 "superblock": true, 00:17:03.983 "num_base_bdevs": 3, 00:17:03.983 "num_base_bdevs_discovered": 1, 00:17:03.983 "num_base_bdevs_operational": 3, 00:17:03.983 "base_bdevs_list": [ 00:17:03.983 { 00:17:03.983 
"name": "BaseBdev1", 00:17:03.983 "uuid": "497c4ef0-7a44-4e47-82fe-a3f9c1ce79f4", 00:17:03.984 "is_configured": true, 00:17:03.984 "data_offset": 2048, 00:17:03.984 "data_size": 63488 00:17:03.984 }, 00:17:03.984 { 00:17:03.984 "name": "BaseBdev2", 00:17:03.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.984 "is_configured": false, 00:17:03.984 "data_offset": 0, 00:17:03.984 "data_size": 0 00:17:03.984 }, 00:17:03.984 { 00:17:03.984 "name": "BaseBdev3", 00:17:03.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.984 "is_configured": false, 00:17:03.984 "data_offset": 0, 00:17:03.984 "data_size": 0 00:17:03.984 } 00:17:03.984 ] 00:17:03.984 }' 00:17:03.984 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.984 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.242 [2024-11-27 04:34:00.779608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.242 [2024-11-27 04:34:00.779739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:04.242 [2024-11-27 04:34:00.787658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.242 [2024-11-27 04:34:00.789699] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.242 [2024-11-27 04:34:00.789750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.242 [2024-11-27 04:34:00.789761] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:04.242 [2024-11-27 04:34:00.789771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.242 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.501 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.501 "name": "Existed_Raid", 00:17:04.501 "uuid": "acd57be0-61f7-42ea-a47c-b466ed746f11", 00:17:04.501 "strip_size_kb": 64, 00:17:04.501 "state": "configuring", 00:17:04.501 "raid_level": "raid5f", 00:17:04.501 "superblock": true, 00:17:04.501 "num_base_bdevs": 3, 00:17:04.501 "num_base_bdevs_discovered": 1, 00:17:04.501 "num_base_bdevs_operational": 3, 00:17:04.501 "base_bdevs_list": [ 00:17:04.501 { 00:17:04.501 "name": "BaseBdev1", 00:17:04.501 "uuid": "497c4ef0-7a44-4e47-82fe-a3f9c1ce79f4", 00:17:04.501 "is_configured": true, 00:17:04.501 "data_offset": 2048, 00:17:04.501 "data_size": 63488 00:17:04.501 }, 00:17:04.501 { 00:17:04.501 "name": "BaseBdev2", 00:17:04.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.501 "is_configured": false, 00:17:04.501 "data_offset": 0, 00:17:04.501 "data_size": 0 00:17:04.501 }, 00:17:04.501 { 00:17:04.501 "name": "BaseBdev3", 00:17:04.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.501 "is_configured": false, 00:17:04.501 "data_offset": 0, 00:17:04.501 "data_size": 
0 00:17:04.501 } 00:17:04.501 ] 00:17:04.501 }' 00:17:04.501 04:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.501 04:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.761 [2024-11-27 04:34:01.288474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.761 BaseBdev2 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.761 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.761 [ 00:17:04.761 { 00:17:04.761 "name": "BaseBdev2", 00:17:04.761 "aliases": [ 00:17:04.761 "04dbd04c-0aba-45d6-8dea-5576067b05be" 00:17:04.761 ], 00:17:04.762 "product_name": "Malloc disk", 00:17:04.762 "block_size": 512, 00:17:04.762 "num_blocks": 65536, 00:17:04.762 "uuid": "04dbd04c-0aba-45d6-8dea-5576067b05be", 00:17:04.762 "assigned_rate_limits": { 00:17:04.762 "rw_ios_per_sec": 0, 00:17:04.762 "rw_mbytes_per_sec": 0, 00:17:04.762 "r_mbytes_per_sec": 0, 00:17:04.762 "w_mbytes_per_sec": 0 00:17:04.762 }, 00:17:04.762 "claimed": true, 00:17:04.762 "claim_type": "exclusive_write", 00:17:04.762 "zoned": false, 00:17:04.762 "supported_io_types": { 00:17:04.762 "read": true, 00:17:04.762 "write": true, 00:17:04.762 "unmap": true, 00:17:04.762 "flush": true, 00:17:04.762 "reset": true, 00:17:04.762 "nvme_admin": false, 00:17:04.762 "nvme_io": false, 00:17:04.762 "nvme_io_md": false, 00:17:04.762 "write_zeroes": true, 00:17:04.762 "zcopy": true, 00:17:04.762 "get_zone_info": false, 00:17:04.762 "zone_management": false, 00:17:04.762 "zone_append": false, 00:17:04.762 "compare": false, 00:17:04.762 "compare_and_write": false, 00:17:04.762 "abort": true, 00:17:04.762 "seek_hole": false, 00:17:04.762 "seek_data": false, 00:17:04.762 "copy": true, 00:17:04.762 "nvme_iov_md": false 00:17:04.762 }, 00:17:04.762 "memory_domains": [ 00:17:04.762 { 00:17:04.762 "dma_device_id": "system", 00:17:04.762 "dma_device_type": 1 00:17:04.762 }, 00:17:04.762 { 00:17:04.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.762 "dma_device_type": 2 00:17:04.762 } 
00:17:04.762 ], 00:17:04.762 "driver_specific": {} 00:17:04.762 } 00:17:04.762 ] 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.762 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.020 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.020 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.020 "name": "Existed_Raid", 00:17:05.020 "uuid": "acd57be0-61f7-42ea-a47c-b466ed746f11", 00:17:05.020 "strip_size_kb": 64, 00:17:05.020 "state": "configuring", 00:17:05.020 "raid_level": "raid5f", 00:17:05.020 "superblock": true, 00:17:05.020 "num_base_bdevs": 3, 00:17:05.020 "num_base_bdevs_discovered": 2, 00:17:05.020 "num_base_bdevs_operational": 3, 00:17:05.020 "base_bdevs_list": [ 00:17:05.020 { 00:17:05.020 "name": "BaseBdev1", 00:17:05.020 "uuid": "497c4ef0-7a44-4e47-82fe-a3f9c1ce79f4", 00:17:05.020 "is_configured": true, 00:17:05.020 "data_offset": 2048, 00:17:05.020 "data_size": 63488 00:17:05.020 }, 00:17:05.020 { 00:17:05.020 "name": "BaseBdev2", 00:17:05.020 "uuid": "04dbd04c-0aba-45d6-8dea-5576067b05be", 00:17:05.020 "is_configured": true, 00:17:05.020 "data_offset": 2048, 00:17:05.020 "data_size": 63488 00:17:05.020 }, 00:17:05.020 { 00:17:05.020 "name": "BaseBdev3", 00:17:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.020 "is_configured": false, 00:17:05.020 "data_offset": 0, 00:17:05.020 "data_size": 0 00:17:05.020 } 00:17:05.020 ] 00:17:05.020 }' 00:17:05.020 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.020 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.279 [2024-11-27 04:34:01.833291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.279 [2024-11-27 04:34:01.833685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:05.279 [2024-11-27 04:34:01.833711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:05.279 [2024-11-27 04:34:01.833979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:05.279 BaseBdev3 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.279 [2024-11-27 04:34:01.839861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:05.279 [2024-11-27 04:34:01.839883] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:05.279 [2024-11-27 04:34:01.840046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.279 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.538 [ 00:17:05.538 { 00:17:05.538 "name": "BaseBdev3", 00:17:05.538 "aliases": [ 00:17:05.538 "37b25488-b141-4a11-962e-733f8ac3306e" 00:17:05.538 ], 00:17:05.538 "product_name": "Malloc disk", 00:17:05.538 "block_size": 512, 00:17:05.538 "num_blocks": 65536, 00:17:05.538 "uuid": "37b25488-b141-4a11-962e-733f8ac3306e", 00:17:05.538 "assigned_rate_limits": { 00:17:05.538 "rw_ios_per_sec": 0, 00:17:05.538 "rw_mbytes_per_sec": 0, 00:17:05.538 "r_mbytes_per_sec": 0, 00:17:05.538 "w_mbytes_per_sec": 0 00:17:05.538 }, 00:17:05.538 "claimed": true, 00:17:05.539 "claim_type": "exclusive_write", 00:17:05.539 "zoned": false, 00:17:05.539 "supported_io_types": { 00:17:05.539 "read": true, 00:17:05.539 "write": true, 00:17:05.539 "unmap": true, 00:17:05.539 "flush": true, 00:17:05.539 "reset": true, 00:17:05.539 "nvme_admin": false, 00:17:05.539 "nvme_io": false, 00:17:05.539 "nvme_io_md": false, 00:17:05.539 "write_zeroes": true, 00:17:05.539 "zcopy": true, 00:17:05.539 "get_zone_info": false, 00:17:05.539 "zone_management": false, 00:17:05.539 "zone_append": false, 00:17:05.539 "compare": false, 00:17:05.539 "compare_and_write": false, 00:17:05.539 "abort": true, 00:17:05.539 "seek_hole": false, 00:17:05.539 "seek_data": false, 00:17:05.539 "copy": true, 00:17:05.539 
"nvme_iov_md": false 00:17:05.539 }, 00:17:05.539 "memory_domains": [ 00:17:05.539 { 00:17:05.539 "dma_device_id": "system", 00:17:05.539 "dma_device_type": 1 00:17:05.539 }, 00:17:05.539 { 00:17:05.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.539 "dma_device_type": 2 00:17:05.539 } 00:17:05.539 ], 00:17:05.539 "driver_specific": {} 00:17:05.539 } 00:17:05.539 ] 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.539 "name": "Existed_Raid", 00:17:05.539 "uuid": "acd57be0-61f7-42ea-a47c-b466ed746f11", 00:17:05.539 "strip_size_kb": 64, 00:17:05.539 "state": "online", 00:17:05.539 "raid_level": "raid5f", 00:17:05.539 "superblock": true, 00:17:05.539 "num_base_bdevs": 3, 00:17:05.539 "num_base_bdevs_discovered": 3, 00:17:05.539 "num_base_bdevs_operational": 3, 00:17:05.539 "base_bdevs_list": [ 00:17:05.539 { 00:17:05.539 "name": "BaseBdev1", 00:17:05.539 "uuid": "497c4ef0-7a44-4e47-82fe-a3f9c1ce79f4", 00:17:05.539 "is_configured": true, 00:17:05.539 "data_offset": 2048, 00:17:05.539 "data_size": 63488 00:17:05.539 }, 00:17:05.539 { 00:17:05.539 "name": "BaseBdev2", 00:17:05.539 "uuid": "04dbd04c-0aba-45d6-8dea-5576067b05be", 00:17:05.539 "is_configured": true, 00:17:05.539 "data_offset": 2048, 00:17:05.539 "data_size": 63488 00:17:05.539 }, 00:17:05.539 { 00:17:05.539 "name": "BaseBdev3", 00:17:05.539 "uuid": "37b25488-b141-4a11-962e-733f8ac3306e", 00:17:05.539 "is_configured": true, 00:17:05.539 "data_offset": 2048, 00:17:05.539 "data_size": 63488 00:17:05.539 } 00:17:05.539 ] 00:17:05.539 }' 00:17:05.539 04:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.539 04:34:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:05.798 [2024-11-27 04:34:02.345902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.798 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:06.057 "name": "Existed_Raid", 00:17:06.057 "aliases": [ 00:17:06.057 "acd57be0-61f7-42ea-a47c-b466ed746f11" 00:17:06.057 ], 00:17:06.057 "product_name": "Raid Volume", 00:17:06.057 "block_size": 512, 00:17:06.057 "num_blocks": 126976, 00:17:06.057 "uuid": "acd57be0-61f7-42ea-a47c-b466ed746f11", 00:17:06.057 "assigned_rate_limits": { 00:17:06.057 "rw_ios_per_sec": 0, 00:17:06.057 
"rw_mbytes_per_sec": 0, 00:17:06.057 "r_mbytes_per_sec": 0, 00:17:06.057 "w_mbytes_per_sec": 0 00:17:06.057 }, 00:17:06.057 "claimed": false, 00:17:06.057 "zoned": false, 00:17:06.057 "supported_io_types": { 00:17:06.057 "read": true, 00:17:06.057 "write": true, 00:17:06.057 "unmap": false, 00:17:06.057 "flush": false, 00:17:06.057 "reset": true, 00:17:06.057 "nvme_admin": false, 00:17:06.057 "nvme_io": false, 00:17:06.057 "nvme_io_md": false, 00:17:06.057 "write_zeroes": true, 00:17:06.057 "zcopy": false, 00:17:06.057 "get_zone_info": false, 00:17:06.057 "zone_management": false, 00:17:06.057 "zone_append": false, 00:17:06.057 "compare": false, 00:17:06.057 "compare_and_write": false, 00:17:06.057 "abort": false, 00:17:06.057 "seek_hole": false, 00:17:06.057 "seek_data": false, 00:17:06.057 "copy": false, 00:17:06.057 "nvme_iov_md": false 00:17:06.057 }, 00:17:06.057 "driver_specific": { 00:17:06.057 "raid": { 00:17:06.057 "uuid": "acd57be0-61f7-42ea-a47c-b466ed746f11", 00:17:06.057 "strip_size_kb": 64, 00:17:06.057 "state": "online", 00:17:06.057 "raid_level": "raid5f", 00:17:06.057 "superblock": true, 00:17:06.057 "num_base_bdevs": 3, 00:17:06.057 "num_base_bdevs_discovered": 3, 00:17:06.057 "num_base_bdevs_operational": 3, 00:17:06.057 "base_bdevs_list": [ 00:17:06.057 { 00:17:06.057 "name": "BaseBdev1", 00:17:06.057 "uuid": "497c4ef0-7a44-4e47-82fe-a3f9c1ce79f4", 00:17:06.057 "is_configured": true, 00:17:06.057 "data_offset": 2048, 00:17:06.057 "data_size": 63488 00:17:06.057 }, 00:17:06.057 { 00:17:06.057 "name": "BaseBdev2", 00:17:06.057 "uuid": "04dbd04c-0aba-45d6-8dea-5576067b05be", 00:17:06.057 "is_configured": true, 00:17:06.057 "data_offset": 2048, 00:17:06.057 "data_size": 63488 00:17:06.057 }, 00:17:06.057 { 00:17:06.057 "name": "BaseBdev3", 00:17:06.057 "uuid": "37b25488-b141-4a11-962e-733f8ac3306e", 00:17:06.057 "is_configured": true, 00:17:06.057 "data_offset": 2048, 00:17:06.057 "data_size": 63488 00:17:06.057 } 00:17:06.057 ] 00:17:06.057 } 
00:17:06.057 } 00:17:06.057 }' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:06.057 BaseBdev2 00:17:06.057 BaseBdev3' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.057 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.316 [2024-11-27 04:34:02.641300] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.316 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.317 "name": "Existed_Raid", 00:17:06.317 "uuid": "acd57be0-61f7-42ea-a47c-b466ed746f11", 00:17:06.317 "strip_size_kb": 64, 00:17:06.317 "state": "online", 00:17:06.317 "raid_level": "raid5f", 00:17:06.317 "superblock": true, 00:17:06.317 "num_base_bdevs": 3, 00:17:06.317 "num_base_bdevs_discovered": 2, 00:17:06.317 "num_base_bdevs_operational": 2, 00:17:06.317 "base_bdevs_list": [ 00:17:06.317 { 00:17:06.317 "name": null, 00:17:06.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.317 "is_configured": false, 00:17:06.317 "data_offset": 0, 00:17:06.317 "data_size": 63488 00:17:06.317 }, 00:17:06.317 { 00:17:06.317 "name": "BaseBdev2", 00:17:06.317 "uuid": "04dbd04c-0aba-45d6-8dea-5576067b05be", 00:17:06.317 "is_configured": true, 00:17:06.317 "data_offset": 2048, 00:17:06.317 "data_size": 63488 00:17:06.317 }, 00:17:06.317 { 00:17:06.317 "name": "BaseBdev3", 00:17:06.317 "uuid": "37b25488-b141-4a11-962e-733f8ac3306e", 00:17:06.317 "is_configured": true, 00:17:06.317 "data_offset": 2048, 00:17:06.317 "data_size": 63488 00:17:06.317 } 00:17:06.317 ] 00:17:06.317 }' 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.317 04:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 04:34:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 [2024-11-27 04:34:03.256674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:06.924 [2024-11-27 04:34:03.256913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.924 [2024-11-27 04:34:03.369296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:06.924 04:34:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.925 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.925 [2024-11-27 04:34:03.429257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:06.925 [2024-11-27 04:34:03.429314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.184 BaseBdev2 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:07.184 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 [ 00:17:07.185 { 00:17:07.185 "name": "BaseBdev2", 00:17:07.185 "aliases": [ 00:17:07.185 "eef9c7c1-412b-4f56-a7ba-ac586f13b428" 00:17:07.185 ], 00:17:07.185 "product_name": "Malloc disk", 00:17:07.185 "block_size": 512, 00:17:07.185 "num_blocks": 65536, 00:17:07.185 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:07.185 "assigned_rate_limits": { 00:17:07.185 "rw_ios_per_sec": 0, 00:17:07.185 "rw_mbytes_per_sec": 0, 00:17:07.185 "r_mbytes_per_sec": 0, 00:17:07.185 "w_mbytes_per_sec": 0 00:17:07.185 }, 00:17:07.185 "claimed": false, 00:17:07.185 "zoned": false, 00:17:07.185 "supported_io_types": { 00:17:07.185 "read": true, 00:17:07.185 "write": true, 00:17:07.185 "unmap": true, 00:17:07.185 "flush": true, 00:17:07.185 "reset": true, 00:17:07.185 "nvme_admin": false, 00:17:07.185 "nvme_io": false, 00:17:07.185 "nvme_io_md": false, 00:17:07.185 "write_zeroes": true, 00:17:07.185 "zcopy": true, 00:17:07.185 "get_zone_info": false, 00:17:07.185 "zone_management": false, 00:17:07.185 "zone_append": false, 
00:17:07.185 "compare": false, 00:17:07.185 "compare_and_write": false, 00:17:07.185 "abort": true, 00:17:07.185 "seek_hole": false, 00:17:07.185 "seek_data": false, 00:17:07.185 "copy": true, 00:17:07.185 "nvme_iov_md": false 00:17:07.185 }, 00:17:07.185 "memory_domains": [ 00:17:07.185 { 00:17:07.185 "dma_device_id": "system", 00:17:07.185 "dma_device_type": 1 00:17:07.185 }, 00:17:07.185 { 00:17:07.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.185 "dma_device_type": 2 00:17:07.185 } 00:17:07.185 ], 00:17:07.185 "driver_specific": {} 00:17:07.185 } 00:17:07.185 ] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 BaseBdev3 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:07.185 
04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 [ 00:17:07.185 { 00:17:07.185 "name": "BaseBdev3", 00:17:07.185 "aliases": [ 00:17:07.185 "8507ef3f-4afe-4868-8ac6-e4abd759e48c" 00:17:07.185 ], 00:17:07.185 "product_name": "Malloc disk", 00:17:07.185 "block_size": 512, 00:17:07.185 "num_blocks": 65536, 00:17:07.185 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:07.185 "assigned_rate_limits": { 00:17:07.185 "rw_ios_per_sec": 0, 00:17:07.185 "rw_mbytes_per_sec": 0, 00:17:07.185 "r_mbytes_per_sec": 0, 00:17:07.185 "w_mbytes_per_sec": 0 00:17:07.185 }, 00:17:07.185 "claimed": false, 00:17:07.185 "zoned": false, 00:17:07.185 "supported_io_types": { 00:17:07.185 "read": true, 00:17:07.185 "write": true, 00:17:07.185 "unmap": true, 00:17:07.185 "flush": true, 00:17:07.185 "reset": true, 00:17:07.185 "nvme_admin": false, 00:17:07.185 "nvme_io": false, 00:17:07.185 "nvme_io_md": false, 00:17:07.185 "write_zeroes": true, 00:17:07.185 "zcopy": true, 00:17:07.185 "get_zone_info": 
false, 00:17:07.185 "zone_management": false, 00:17:07.185 "zone_append": false, 00:17:07.185 "compare": false, 00:17:07.185 "compare_and_write": false, 00:17:07.185 "abort": true, 00:17:07.185 "seek_hole": false, 00:17:07.185 "seek_data": false, 00:17:07.185 "copy": true, 00:17:07.185 "nvme_iov_md": false 00:17:07.185 }, 00:17:07.185 "memory_domains": [ 00:17:07.185 { 00:17:07.185 "dma_device_id": "system", 00:17:07.185 "dma_device_type": 1 00:17:07.185 }, 00:17:07.185 { 00:17:07.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.185 "dma_device_type": 2 00:17:07.185 } 00:17:07.185 ], 00:17:07.185 "driver_specific": {} 00:17:07.185 } 00:17:07.185 ] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 [2024-11-27 04:34:03.758290] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.185 [2024-11-27 04:34:03.758382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.185 [2024-11-27 04:34:03.758449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.185 [2024-11-27 04:34:03.760314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.185 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.445 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.445 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.445 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.445 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.445 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.445 04:34:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.445 "name": "Existed_Raid", 00:17:07.445 "uuid": "ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:07.445 "strip_size_kb": 64, 00:17:07.445 "state": "configuring", 00:17:07.445 "raid_level": "raid5f", 00:17:07.445 "superblock": true, 00:17:07.445 "num_base_bdevs": 3, 00:17:07.445 "num_base_bdevs_discovered": 2, 00:17:07.445 "num_base_bdevs_operational": 3, 00:17:07.445 "base_bdevs_list": [ 00:17:07.445 { 00:17:07.445 "name": "BaseBdev1", 00:17:07.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.445 "is_configured": false, 00:17:07.445 "data_offset": 0, 00:17:07.445 "data_size": 0 00:17:07.445 }, 00:17:07.445 { 00:17:07.445 "name": "BaseBdev2", 00:17:07.445 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:07.445 "is_configured": true, 00:17:07.445 "data_offset": 2048, 00:17:07.445 "data_size": 63488 00:17:07.445 }, 00:17:07.445 { 00:17:07.445 "name": "BaseBdev3", 00:17:07.445 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:07.445 "is_configured": true, 00:17:07.445 "data_offset": 2048, 00:17:07.445 "data_size": 63488 00:17:07.445 } 00:17:07.445 ] 00:17:07.445 }' 00:17:07.445 04:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.445 04:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.704 [2024-11-27 04:34:04.209575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.704 
04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.704 "name": "Existed_Raid", 00:17:07.704 "uuid": 
"ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:07.704 "strip_size_kb": 64, 00:17:07.704 "state": "configuring", 00:17:07.704 "raid_level": "raid5f", 00:17:07.704 "superblock": true, 00:17:07.704 "num_base_bdevs": 3, 00:17:07.704 "num_base_bdevs_discovered": 1, 00:17:07.704 "num_base_bdevs_operational": 3, 00:17:07.704 "base_bdevs_list": [ 00:17:07.704 { 00:17:07.704 "name": "BaseBdev1", 00:17:07.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.704 "is_configured": false, 00:17:07.704 "data_offset": 0, 00:17:07.704 "data_size": 0 00:17:07.704 }, 00:17:07.704 { 00:17:07.704 "name": null, 00:17:07.704 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:07.704 "is_configured": false, 00:17:07.704 "data_offset": 0, 00:17:07.704 "data_size": 63488 00:17:07.704 }, 00:17:07.704 { 00:17:07.704 "name": "BaseBdev3", 00:17:07.704 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:07.704 "is_configured": true, 00:17:07.704 "data_offset": 2048, 00:17:07.704 "data_size": 63488 00:17:07.704 } 00:17:07.704 ] 00:17:07.704 }' 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.704 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:08.271 04:34:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.271 [2024-11-27 04:34:04.786384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.271 BaseBdev1 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:08.271 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.272 [ 00:17:08.272 { 00:17:08.272 "name": "BaseBdev1", 00:17:08.272 "aliases": [ 00:17:08.272 "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7" 00:17:08.272 ], 00:17:08.272 "product_name": "Malloc disk", 00:17:08.272 "block_size": 512, 00:17:08.272 "num_blocks": 65536, 00:17:08.272 "uuid": "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7", 00:17:08.272 "assigned_rate_limits": { 00:17:08.272 "rw_ios_per_sec": 0, 00:17:08.272 "rw_mbytes_per_sec": 0, 00:17:08.272 "r_mbytes_per_sec": 0, 00:17:08.272 "w_mbytes_per_sec": 0 00:17:08.272 }, 00:17:08.272 "claimed": true, 00:17:08.272 "claim_type": "exclusive_write", 00:17:08.272 "zoned": false, 00:17:08.272 "supported_io_types": { 00:17:08.272 "read": true, 00:17:08.272 "write": true, 00:17:08.272 "unmap": true, 00:17:08.272 "flush": true, 00:17:08.272 "reset": true, 00:17:08.272 "nvme_admin": false, 00:17:08.272 "nvme_io": false, 00:17:08.272 "nvme_io_md": false, 00:17:08.272 "write_zeroes": true, 00:17:08.272 "zcopy": true, 00:17:08.272 "get_zone_info": false, 00:17:08.272 "zone_management": false, 00:17:08.272 "zone_append": false, 00:17:08.272 "compare": false, 00:17:08.272 "compare_and_write": false, 00:17:08.272 "abort": true, 00:17:08.272 "seek_hole": false, 00:17:08.272 "seek_data": false, 00:17:08.272 "copy": true, 00:17:08.272 "nvme_iov_md": false 00:17:08.272 }, 00:17:08.272 "memory_domains": [ 00:17:08.272 { 00:17:08.272 "dma_device_id": "system", 00:17:08.272 "dma_device_type": 1 00:17:08.272 }, 00:17:08.272 { 00:17:08.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.272 "dma_device_type": 2 00:17:08.272 } 00:17:08.272 ], 00:17:08.272 "driver_specific": {} 00:17:08.272 } 00:17:08.272 ] 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.272 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.531 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.531 "name": "Existed_Raid", 00:17:08.531 "uuid": 
"ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:08.531 "strip_size_kb": 64, 00:17:08.531 "state": "configuring", 00:17:08.531 "raid_level": "raid5f", 00:17:08.531 "superblock": true, 00:17:08.531 "num_base_bdevs": 3, 00:17:08.531 "num_base_bdevs_discovered": 2, 00:17:08.531 "num_base_bdevs_operational": 3, 00:17:08.531 "base_bdevs_list": [ 00:17:08.531 { 00:17:08.531 "name": "BaseBdev1", 00:17:08.531 "uuid": "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7", 00:17:08.531 "is_configured": true, 00:17:08.531 "data_offset": 2048, 00:17:08.531 "data_size": 63488 00:17:08.531 }, 00:17:08.531 { 00:17:08.531 "name": null, 00:17:08.531 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:08.531 "is_configured": false, 00:17:08.531 "data_offset": 0, 00:17:08.531 "data_size": 63488 00:17:08.531 }, 00:17:08.531 { 00:17:08.531 "name": "BaseBdev3", 00:17:08.531 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:08.531 "is_configured": true, 00:17:08.531 "data_offset": 2048, 00:17:08.531 "data_size": 63488 00:17:08.531 } 00:17:08.531 ] 00:17:08.531 }' 00:17:08.531 04:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.531 04:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:08.789 04:34:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.789 [2024-11-27 04:34:05.333534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.789 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.049 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.049 "name": "Existed_Raid", 00:17:09.049 "uuid": "ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:09.049 "strip_size_kb": 64, 00:17:09.049 "state": "configuring", 00:17:09.049 "raid_level": "raid5f", 00:17:09.049 "superblock": true, 00:17:09.049 "num_base_bdevs": 3, 00:17:09.049 "num_base_bdevs_discovered": 1, 00:17:09.049 "num_base_bdevs_operational": 3, 00:17:09.049 "base_bdevs_list": [ 00:17:09.049 { 00:17:09.049 "name": "BaseBdev1", 00:17:09.049 "uuid": "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7", 00:17:09.049 "is_configured": true, 00:17:09.049 "data_offset": 2048, 00:17:09.049 "data_size": 63488 00:17:09.049 }, 00:17:09.049 { 00:17:09.049 "name": null, 00:17:09.049 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:09.049 "is_configured": false, 00:17:09.049 "data_offset": 0, 00:17:09.049 "data_size": 63488 00:17:09.049 }, 00:17:09.049 { 00:17:09.049 "name": null, 00:17:09.049 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:09.049 "is_configured": false, 00:17:09.049 "data_offset": 0, 00:17:09.049 "data_size": 63488 00:17:09.049 } 00:17:09.049 ] 00:17:09.049 }' 00:17:09.049 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.049 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.308 [2024-11-27 04:34:05.860709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.308 04:34:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.308 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.567 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.567 "name": "Existed_Raid", 00:17:09.567 "uuid": "ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:09.567 "strip_size_kb": 64, 00:17:09.567 "state": "configuring", 00:17:09.567 "raid_level": "raid5f", 00:17:09.567 "superblock": true, 00:17:09.567 "num_base_bdevs": 3, 00:17:09.567 "num_base_bdevs_discovered": 2, 00:17:09.567 "num_base_bdevs_operational": 3, 00:17:09.567 "base_bdevs_list": [ 00:17:09.567 { 00:17:09.567 "name": "BaseBdev1", 00:17:09.567 "uuid": "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7", 00:17:09.567 "is_configured": true, 00:17:09.567 "data_offset": 2048, 00:17:09.567 "data_size": 63488 00:17:09.567 }, 00:17:09.567 { 00:17:09.567 "name": null, 00:17:09.567 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:09.567 "is_configured": false, 00:17:09.567 "data_offset": 0, 00:17:09.567 "data_size": 63488 00:17:09.567 }, 00:17:09.567 { 00:17:09.567 "name": "BaseBdev3", 00:17:09.567 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:09.567 
"is_configured": true, 00:17:09.567 "data_offset": 2048, 00:17:09.567 "data_size": 63488 00:17:09.567 } 00:17:09.567 ] 00:17:09.567 }' 00:17:09.567 04:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.567 04:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.826 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.827 [2024-11-27 04:34:06.299950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.827 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.086 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.086 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.086 "name": "Existed_Raid", 00:17:10.086 "uuid": "ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:10.086 "strip_size_kb": 64, 00:17:10.086 "state": "configuring", 00:17:10.086 "raid_level": "raid5f", 00:17:10.086 "superblock": true, 00:17:10.086 "num_base_bdevs": 3, 00:17:10.086 "num_base_bdevs_discovered": 1, 00:17:10.086 "num_base_bdevs_operational": 3, 00:17:10.086 "base_bdevs_list": [ 00:17:10.086 { 00:17:10.086 "name": null, 00:17:10.086 
"uuid": "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7", 00:17:10.086 "is_configured": false, 00:17:10.086 "data_offset": 0, 00:17:10.086 "data_size": 63488 00:17:10.086 }, 00:17:10.086 { 00:17:10.086 "name": null, 00:17:10.086 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:10.086 "is_configured": false, 00:17:10.086 "data_offset": 0, 00:17:10.086 "data_size": 63488 00:17:10.086 }, 00:17:10.086 { 00:17:10.086 "name": "BaseBdev3", 00:17:10.086 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:10.086 "is_configured": true, 00:17:10.086 "data_offset": 2048, 00:17:10.086 "data_size": 63488 00:17:10.086 } 00:17:10.086 ] 00:17:10.086 }' 00:17:10.086 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.086 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.345 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.346 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.346 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.346 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:10.346 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.604 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:10.604 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:10.604 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.604 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.604 [2024-11-27 04:34:06.938966] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.605 "name": "Existed_Raid", 00:17:10.605 "uuid": "ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:10.605 "strip_size_kb": 64, 00:17:10.605 "state": "configuring", 00:17:10.605 "raid_level": "raid5f", 00:17:10.605 "superblock": true, 00:17:10.605 "num_base_bdevs": 3, 00:17:10.605 "num_base_bdevs_discovered": 2, 00:17:10.605 "num_base_bdevs_operational": 3, 00:17:10.605 "base_bdevs_list": [ 00:17:10.605 { 00:17:10.605 "name": null, 00:17:10.605 "uuid": "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7", 00:17:10.605 "is_configured": false, 00:17:10.605 "data_offset": 0, 00:17:10.605 "data_size": 63488 00:17:10.605 }, 00:17:10.605 { 00:17:10.605 "name": "BaseBdev2", 00:17:10.605 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:10.605 "is_configured": true, 00:17:10.605 "data_offset": 2048, 00:17:10.605 "data_size": 63488 00:17:10.605 }, 00:17:10.605 { 00:17:10.605 "name": "BaseBdev3", 00:17:10.605 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:10.605 "is_configured": true, 00:17:10.605 "data_offset": 2048, 00:17:10.605 "data_size": 63488 00:17:10.605 } 00:17:10.605 ] 00:17:10.605 }' 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.605 04:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.872 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.872 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.872 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.872 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:10.872 04:34:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4f07a8d8-8b2d-43b4-a863-b63e054ef8c7 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.131 [2024-11-27 04:34:07.565420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:11.131 [2024-11-27 04:34:07.565699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:11.131 [2024-11-27 04:34:07.565717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:11.131 [2024-11-27 04:34:07.565987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:11.131 NewBaseBdev 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:11.131 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.132 [2024-11-27 04:34:07.572040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:11.132 [2024-11-27 04:34:07.572065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:11.132 [2024-11-27 04:34:07.572371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.132 [ 00:17:11.132 { 00:17:11.132 "name": "NewBaseBdev", 00:17:11.132 "aliases": [ 00:17:11.132 "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7" 00:17:11.132 ], 00:17:11.132 "product_name": "Malloc disk", 00:17:11.132 "block_size": 512, 
00:17:11.132 "num_blocks": 65536, 00:17:11.132 "uuid": "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7", 00:17:11.132 "assigned_rate_limits": { 00:17:11.132 "rw_ios_per_sec": 0, 00:17:11.132 "rw_mbytes_per_sec": 0, 00:17:11.132 "r_mbytes_per_sec": 0, 00:17:11.132 "w_mbytes_per_sec": 0 00:17:11.132 }, 00:17:11.132 "claimed": true, 00:17:11.132 "claim_type": "exclusive_write", 00:17:11.132 "zoned": false, 00:17:11.132 "supported_io_types": { 00:17:11.132 "read": true, 00:17:11.132 "write": true, 00:17:11.132 "unmap": true, 00:17:11.132 "flush": true, 00:17:11.132 "reset": true, 00:17:11.132 "nvme_admin": false, 00:17:11.132 "nvme_io": false, 00:17:11.132 "nvme_io_md": false, 00:17:11.132 "write_zeroes": true, 00:17:11.132 "zcopy": true, 00:17:11.132 "get_zone_info": false, 00:17:11.132 "zone_management": false, 00:17:11.132 "zone_append": false, 00:17:11.132 "compare": false, 00:17:11.132 "compare_and_write": false, 00:17:11.132 "abort": true, 00:17:11.132 "seek_hole": false, 00:17:11.132 "seek_data": false, 00:17:11.132 "copy": true, 00:17:11.132 "nvme_iov_md": false 00:17:11.132 }, 00:17:11.132 "memory_domains": [ 00:17:11.132 { 00:17:11.132 "dma_device_id": "system", 00:17:11.132 "dma_device_type": 1 00:17:11.132 }, 00:17:11.132 { 00:17:11.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.132 "dma_device_type": 2 00:17:11.132 } 00:17:11.132 ], 00:17:11.132 "driver_specific": {} 00:17:11.132 } 00:17:11.132 ] 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.132 "name": "Existed_Raid", 00:17:11.132 "uuid": "ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:11.132 "strip_size_kb": 64, 00:17:11.132 "state": "online", 00:17:11.132 "raid_level": "raid5f", 00:17:11.132 "superblock": true, 00:17:11.132 "num_base_bdevs": 3, 00:17:11.132 "num_base_bdevs_discovered": 3, 00:17:11.132 "num_base_bdevs_operational": 3, 00:17:11.132 "base_bdevs_list": [ 00:17:11.132 { 00:17:11.132 "name": 
"NewBaseBdev", 00:17:11.132 "uuid": "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7", 00:17:11.132 "is_configured": true, 00:17:11.132 "data_offset": 2048, 00:17:11.132 "data_size": 63488 00:17:11.132 }, 00:17:11.132 { 00:17:11.132 "name": "BaseBdev2", 00:17:11.132 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:11.132 "is_configured": true, 00:17:11.132 "data_offset": 2048, 00:17:11.132 "data_size": 63488 00:17:11.132 }, 00:17:11.132 { 00:17:11.132 "name": "BaseBdev3", 00:17:11.132 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:11.132 "is_configured": true, 00:17:11.132 "data_offset": 2048, 00:17:11.132 "data_size": 63488 00:17:11.132 } 00:17:11.132 ] 00:17:11.132 }' 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.132 04:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.701 04:34:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.701 [2024-11-27 04:34:08.047052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:11.701 "name": "Existed_Raid", 00:17:11.701 "aliases": [ 00:17:11.701 "ffff7cd1-7735-4226-8dd7-a8241416e071" 00:17:11.701 ], 00:17:11.701 "product_name": "Raid Volume", 00:17:11.701 "block_size": 512, 00:17:11.701 "num_blocks": 126976, 00:17:11.701 "uuid": "ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:11.701 "assigned_rate_limits": { 00:17:11.701 "rw_ios_per_sec": 0, 00:17:11.701 "rw_mbytes_per_sec": 0, 00:17:11.701 "r_mbytes_per_sec": 0, 00:17:11.701 "w_mbytes_per_sec": 0 00:17:11.701 }, 00:17:11.701 "claimed": false, 00:17:11.701 "zoned": false, 00:17:11.701 "supported_io_types": { 00:17:11.701 "read": true, 00:17:11.701 "write": true, 00:17:11.701 "unmap": false, 00:17:11.701 "flush": false, 00:17:11.701 "reset": true, 00:17:11.701 "nvme_admin": false, 00:17:11.701 "nvme_io": false, 00:17:11.701 "nvme_io_md": false, 00:17:11.701 "write_zeroes": true, 00:17:11.701 "zcopy": false, 00:17:11.701 "get_zone_info": false, 00:17:11.701 "zone_management": false, 00:17:11.701 "zone_append": false, 00:17:11.701 "compare": false, 00:17:11.701 "compare_and_write": false, 00:17:11.701 "abort": false, 00:17:11.701 "seek_hole": false, 00:17:11.701 "seek_data": false, 00:17:11.701 "copy": false, 00:17:11.701 "nvme_iov_md": false 00:17:11.701 }, 00:17:11.701 "driver_specific": { 00:17:11.701 "raid": { 00:17:11.701 "uuid": "ffff7cd1-7735-4226-8dd7-a8241416e071", 00:17:11.701 "strip_size_kb": 64, 00:17:11.701 "state": "online", 00:17:11.701 "raid_level": "raid5f", 00:17:11.701 "superblock": true, 00:17:11.701 "num_base_bdevs": 3, 00:17:11.701 
"num_base_bdevs_discovered": 3, 00:17:11.701 "num_base_bdevs_operational": 3, 00:17:11.701 "base_bdevs_list": [ 00:17:11.701 { 00:17:11.701 "name": "NewBaseBdev", 00:17:11.701 "uuid": "4f07a8d8-8b2d-43b4-a863-b63e054ef8c7", 00:17:11.701 "is_configured": true, 00:17:11.701 "data_offset": 2048, 00:17:11.701 "data_size": 63488 00:17:11.701 }, 00:17:11.701 { 00:17:11.701 "name": "BaseBdev2", 00:17:11.701 "uuid": "eef9c7c1-412b-4f56-a7ba-ac586f13b428", 00:17:11.701 "is_configured": true, 00:17:11.701 "data_offset": 2048, 00:17:11.701 "data_size": 63488 00:17:11.701 }, 00:17:11.701 { 00:17:11.701 "name": "BaseBdev3", 00:17:11.701 "uuid": "8507ef3f-4afe-4868-8ac6-e4abd759e48c", 00:17:11.701 "is_configured": true, 00:17:11.701 "data_offset": 2048, 00:17:11.701 "data_size": 63488 00:17:11.701 } 00:17:11.701 ] 00:17:11.701 } 00:17:11.701 } 00:17:11.701 }' 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:11.701 BaseBdev2 00:17:11.701 BaseBdev3' 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.701 04:34:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.701 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.960 [2024-11-27 04:34:08.350329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:11.960 [2024-11-27 04:34:08.350368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.960 [2024-11-27 04:34:08.350483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.960 [2024-11-27 04:34:08.350822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.960 [2024-11-27 04:34:08.350838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80834 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80834 ']' 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80834 00:17:11.960 04:34:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80834 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.960 killing process with pid 80834 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80834' 00:17:11.960 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80834 00:17:11.961 [2024-11-27 04:34:08.404383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.961 04:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80834 00:17:12.220 [2024-11-27 04:34:08.721050] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.617 04:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:13.617 00:17:13.617 real 0m11.152s 00:17:13.617 user 0m17.693s 00:17:13.617 sys 0m2.009s 00:17:13.617 04:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.617 04:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.617 ************************************ 00:17:13.617 END TEST raid5f_state_function_test_sb 00:17:13.617 ************************************ 00:17:13.617 04:34:09 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:17:13.617 04:34:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:13.617 
04:34:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.617 04:34:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.617 ************************************ 00:17:13.617 START TEST raid5f_superblock_test 00:17:13.617 ************************************ 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81459 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81459 00:17:13.617 04:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81459 ']' 00:17:13.617 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.617 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.617 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.617 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.617 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.617 [2024-11-27 04:34:10.089564] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:17:13.617 [2024-11-27 04:34:10.090273] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81459 ] 00:17:13.876 [2024-11-27 04:34:10.265897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.876 [2024-11-27 04:34:10.395348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.134 [2024-11-27 04:34:10.623263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.134 [2024-11-27 04:34:10.623418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.393 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.652 malloc1 00:17:14.652 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.652 04:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:14.653 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.653 04:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.653 [2024-11-27 04:34:11.000980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.653 [2024-11-27 04:34:11.001042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.653 [2024-11-27 04:34:11.001062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:14.653 [2024-11-27 04:34:11.001071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.653 [2024-11-27 04:34:11.003307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.653 [2024-11-27 04:34:11.003345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.653 pt1 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.653 malloc2 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.653 [2024-11-27 04:34:11.065545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:14.653 [2024-11-27 04:34:11.065705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.653 [2024-11-27 04:34:11.065760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:14.653 [2024-11-27 04:34:11.065802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.653 [2024-11-27 04:34:11.068440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.653 [2024-11-27 04:34:11.068573] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:14.653 pt2 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.653 malloc3 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.653 [2024-11-27 04:34:11.145366] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:14.653 [2024-11-27 04:34:11.145495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.653 [2024-11-27 04:34:11.145539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:14.653 [2024-11-27 04:34:11.145581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.653 [2024-11-27 04:34:11.147912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.653 [2024-11-27 04:34:11.147994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:14.653 pt3 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.653 [2024-11-27 04:34:11.161412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.653 [2024-11-27 04:34:11.163448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.653 [2024-11-27 04:34:11.163521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:14.653 [2024-11-27 04:34:11.163715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:14.653 [2024-11-27 04:34:11.163739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:17:14.653 [2024-11-27 04:34:11.164035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:14.653 [2024-11-27 04:34:11.170740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:14.653 [2024-11-27 04:34:11.170804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:14.653 [2024-11-27 04:34:11.171116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.653 
04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.653 "name": "raid_bdev1", 00:17:14.653 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:14.653 "strip_size_kb": 64, 00:17:14.653 "state": "online", 00:17:14.653 "raid_level": "raid5f", 00:17:14.653 "superblock": true, 00:17:14.653 "num_base_bdevs": 3, 00:17:14.653 "num_base_bdevs_discovered": 3, 00:17:14.653 "num_base_bdevs_operational": 3, 00:17:14.653 "base_bdevs_list": [ 00:17:14.653 { 00:17:14.653 "name": "pt1", 00:17:14.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.653 "is_configured": true, 00:17:14.653 "data_offset": 2048, 00:17:14.653 "data_size": 63488 00:17:14.653 }, 00:17:14.653 { 00:17:14.653 "name": "pt2", 00:17:14.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.653 "is_configured": true, 00:17:14.653 "data_offset": 2048, 00:17:14.653 "data_size": 63488 00:17:14.653 }, 00:17:14.653 { 00:17:14.653 "name": "pt3", 00:17:14.653 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.653 "is_configured": true, 00:17:14.653 "data_offset": 2048, 00:17:14.653 "data_size": 63488 00:17:14.653 } 00:17:14.653 ] 00:17:14.653 }' 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.653 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:15.221 04:34:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.221 [2024-11-27 04:34:11.646198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.221 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.221 "name": "raid_bdev1", 00:17:15.221 "aliases": [ 00:17:15.221 "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56" 00:17:15.221 ], 00:17:15.221 "product_name": "Raid Volume", 00:17:15.221 "block_size": 512, 00:17:15.221 "num_blocks": 126976, 00:17:15.221 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:15.221 "assigned_rate_limits": { 00:17:15.221 "rw_ios_per_sec": 0, 00:17:15.221 "rw_mbytes_per_sec": 0, 00:17:15.221 "r_mbytes_per_sec": 0, 00:17:15.221 "w_mbytes_per_sec": 0 00:17:15.221 }, 00:17:15.221 "claimed": false, 00:17:15.221 "zoned": false, 00:17:15.221 "supported_io_types": { 00:17:15.221 "read": true, 00:17:15.221 "write": true, 00:17:15.221 "unmap": false, 00:17:15.221 "flush": false, 00:17:15.221 "reset": true, 00:17:15.221 "nvme_admin": false, 00:17:15.221 "nvme_io": false, 00:17:15.221 "nvme_io_md": false, 
00:17:15.222 "write_zeroes": true, 00:17:15.222 "zcopy": false, 00:17:15.222 "get_zone_info": false, 00:17:15.222 "zone_management": false, 00:17:15.222 "zone_append": false, 00:17:15.222 "compare": false, 00:17:15.222 "compare_and_write": false, 00:17:15.222 "abort": false, 00:17:15.222 "seek_hole": false, 00:17:15.222 "seek_data": false, 00:17:15.222 "copy": false, 00:17:15.222 "nvme_iov_md": false 00:17:15.222 }, 00:17:15.222 "driver_specific": { 00:17:15.222 "raid": { 00:17:15.222 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:15.222 "strip_size_kb": 64, 00:17:15.222 "state": "online", 00:17:15.222 "raid_level": "raid5f", 00:17:15.222 "superblock": true, 00:17:15.222 "num_base_bdevs": 3, 00:17:15.222 "num_base_bdevs_discovered": 3, 00:17:15.222 "num_base_bdevs_operational": 3, 00:17:15.222 "base_bdevs_list": [ 00:17:15.222 { 00:17:15.222 "name": "pt1", 00:17:15.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.222 "is_configured": true, 00:17:15.222 "data_offset": 2048, 00:17:15.222 "data_size": 63488 00:17:15.222 }, 00:17:15.222 { 00:17:15.222 "name": "pt2", 00:17:15.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.222 "is_configured": true, 00:17:15.222 "data_offset": 2048, 00:17:15.222 "data_size": 63488 00:17:15.222 }, 00:17:15.222 { 00:17:15.222 "name": "pt3", 00:17:15.222 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.222 "is_configured": true, 00:17:15.222 "data_offset": 2048, 00:17:15.222 "data_size": 63488 00:17:15.222 } 00:17:15.222 ] 00:17:15.222 } 00:17:15.222 } 00:17:15.222 }' 00:17:15.222 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.222 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:15.222 pt2 00:17:15.222 pt3' 00:17:15.222 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:15.222 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:15.222 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.222 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:15.222 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.222 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.222 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.481 
04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:15.481 [2024-11-27 04:34:11.949588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6e041cc8-2d52-4dab-8fdd-1c05c29cbd56 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6e041cc8-2d52-4dab-8fdd-1c05c29cbd56 ']' 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.481 04:34:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.481 04:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.481 [2024-11-27 04:34:11.997317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.481 [2024-11-27 04:34:11.997349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.481 [2024-11-27 04:34:11.997446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.481 [2024-11-27 04:34:11.997527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.481 [2024-11-27 04:34:11.997539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.481 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.741 [2024-11-27 04:34:12.153114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:15.741 [2024-11-27 04:34:12.155343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:15.741 [2024-11-27 04:34:12.155464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:15.741 [2024-11-27 04:34:12.155560] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:15.741 [2024-11-27 04:34:12.155657] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:15.741 [2024-11-27 04:34:12.155739] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:15.741 [2024-11-27 04:34:12.155802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.741 [2024-11-27 04:34:12.155835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:15.741 request: 00:17:15.741 { 00:17:15.741 "name": "raid_bdev1", 00:17:15.741 "raid_level": "raid5f", 00:17:15.741 "base_bdevs": [ 00:17:15.741 "malloc1", 00:17:15.741 "malloc2", 00:17:15.741 "malloc3" 00:17:15.741 ], 00:17:15.741 "strip_size_kb": 64, 00:17:15.741 "superblock": false, 00:17:15.741 "method": "bdev_raid_create", 00:17:15.741 "req_id": 1 00:17:15.741 } 00:17:15.741 Got JSON-RPC error response 00:17:15.741 response: 00:17:15.741 { 00:17:15.741 "code": -17, 00:17:15.741 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:15.741 } 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.741 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.741 
04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.742 [2024-11-27 04:34:12.204953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:15.742 [2024-11-27 04:34:12.205038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.742 [2024-11-27 04:34:12.205061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:15.742 [2024-11-27 04:34:12.205072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.742 [2024-11-27 04:34:12.207574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.742 [2024-11-27 04:34:12.207662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:15.742 [2024-11-27 04:34:12.207782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:15.742 [2024-11-27 04:34:12.207849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:15.742 pt1 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.742 "name": "raid_bdev1", 00:17:15.742 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:15.742 "strip_size_kb": 64, 00:17:15.742 "state": "configuring", 00:17:15.742 "raid_level": "raid5f", 00:17:15.742 "superblock": true, 00:17:15.742 "num_base_bdevs": 3, 00:17:15.742 "num_base_bdevs_discovered": 1, 00:17:15.742 
"num_base_bdevs_operational": 3, 00:17:15.742 "base_bdevs_list": [ 00:17:15.742 { 00:17:15.742 "name": "pt1", 00:17:15.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.742 "is_configured": true, 00:17:15.742 "data_offset": 2048, 00:17:15.742 "data_size": 63488 00:17:15.742 }, 00:17:15.742 { 00:17:15.742 "name": null, 00:17:15.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.742 "is_configured": false, 00:17:15.742 "data_offset": 2048, 00:17:15.742 "data_size": 63488 00:17:15.742 }, 00:17:15.742 { 00:17:15.742 "name": null, 00:17:15.742 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.742 "is_configured": false, 00:17:15.742 "data_offset": 2048, 00:17:15.742 "data_size": 63488 00:17:15.742 } 00:17:15.742 ] 00:17:15.742 }' 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.742 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.310 [2024-11-27 04:34:12.676182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.310 [2024-11-27 04:34:12.676297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.310 [2024-11-27 04:34:12.676340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:16.310 [2024-11-27 04:34:12.676381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.310 [2024-11-27 04:34:12.676872] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.310 [2024-11-27 04:34:12.676947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.310 [2024-11-27 04:34:12.677080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:16.310 [2024-11-27 04:34:12.677156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.310 pt2 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.310 [2024-11-27 04:34:12.688151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.310 04:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.310 "name": "raid_bdev1", 00:17:16.310 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:16.311 "strip_size_kb": 64, 00:17:16.311 "state": "configuring", 00:17:16.311 "raid_level": "raid5f", 00:17:16.311 "superblock": true, 00:17:16.311 "num_base_bdevs": 3, 00:17:16.311 "num_base_bdevs_discovered": 1, 00:17:16.311 "num_base_bdevs_operational": 3, 00:17:16.311 "base_bdevs_list": [ 00:17:16.311 { 00:17:16.311 "name": "pt1", 00:17:16.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.311 "is_configured": true, 00:17:16.311 "data_offset": 2048, 00:17:16.311 "data_size": 63488 00:17:16.311 }, 00:17:16.311 { 00:17:16.311 "name": null, 00:17:16.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.311 "is_configured": false, 00:17:16.311 "data_offset": 0, 00:17:16.311 "data_size": 63488 00:17:16.311 }, 00:17:16.311 { 00:17:16.311 "name": null, 00:17:16.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.311 "is_configured": false, 00:17:16.311 "data_offset": 2048, 00:17:16.311 "data_size": 63488 00:17:16.311 } 00:17:16.311 ] 00:17:16.311 }' 00:17:16.311 04:34:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.311 04:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 [2024-11-27 04:34:13.171412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.879 [2024-11-27 04:34:13.171504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.879 [2024-11-27 04:34:13.171525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:16.879 [2024-11-27 04:34:13.171538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.879 [2024-11-27 04:34:13.172047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.879 [2024-11-27 04:34:13.172071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.879 [2024-11-27 04:34:13.172180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:16.879 [2024-11-27 04:34:13.172210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.879 pt2 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:16.879 04:34:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 [2024-11-27 04:34:13.183357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.879 [2024-11-27 04:34:13.183414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.879 [2024-11-27 04:34:13.183431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:16.879 [2024-11-27 04:34:13.183449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.879 [2024-11-27 04:34:13.183883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.879 [2024-11-27 04:34:13.183906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.879 [2024-11-27 04:34:13.183977] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:16.879 [2024-11-27 04:34:13.184002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.879 [2024-11-27 04:34:13.184184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:16.879 [2024-11-27 04:34:13.184218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:16.879 [2024-11-27 04:34:13.184512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:16.879 [2024-11-27 04:34:13.190678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:16.879 [2024-11-27 04:34:13.190742] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:16.879 [2024-11-27 04:34:13.190995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.879 pt3 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.879 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.880 "name": "raid_bdev1", 00:17:16.880 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:16.880 "strip_size_kb": 64, 00:17:16.880 "state": "online", 00:17:16.880 "raid_level": "raid5f", 00:17:16.880 "superblock": true, 00:17:16.880 "num_base_bdevs": 3, 00:17:16.880 "num_base_bdevs_discovered": 3, 00:17:16.880 "num_base_bdevs_operational": 3, 00:17:16.880 "base_bdevs_list": [ 00:17:16.880 { 00:17:16.880 "name": "pt1", 00:17:16.880 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.880 "is_configured": true, 00:17:16.880 "data_offset": 2048, 00:17:16.880 "data_size": 63488 00:17:16.880 }, 00:17:16.880 { 00:17:16.880 "name": "pt2", 00:17:16.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.880 "is_configured": true, 00:17:16.880 "data_offset": 2048, 00:17:16.880 "data_size": 63488 00:17:16.880 }, 00:17:16.880 { 00:17:16.880 "name": "pt3", 00:17:16.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.880 "is_configured": true, 00:17:16.880 "data_offset": 2048, 00:17:16.880 "data_size": 63488 00:17:16.880 } 00:17:16.880 ] 00:17:16.880 }' 00:17:16.880 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.880 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:17.138 
04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:17.138 [2024-11-27 04:34:13.614339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:17.138 "name": "raid_bdev1", 00:17:17.138 "aliases": [ 00:17:17.138 "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56" 00:17:17.138 ], 00:17:17.138 "product_name": "Raid Volume", 00:17:17.138 "block_size": 512, 00:17:17.138 "num_blocks": 126976, 00:17:17.138 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:17.138 "assigned_rate_limits": { 00:17:17.138 "rw_ios_per_sec": 0, 00:17:17.138 "rw_mbytes_per_sec": 0, 00:17:17.138 "r_mbytes_per_sec": 0, 00:17:17.138 "w_mbytes_per_sec": 0 00:17:17.138 }, 00:17:17.138 "claimed": false, 00:17:17.138 "zoned": false, 00:17:17.138 "supported_io_types": { 00:17:17.138 "read": true, 00:17:17.138 "write": true, 00:17:17.138 "unmap": false, 00:17:17.138 "flush": false, 00:17:17.138 "reset": true, 00:17:17.138 "nvme_admin": false, 00:17:17.138 "nvme_io": false, 00:17:17.138 "nvme_io_md": false, 00:17:17.138 "write_zeroes": true, 00:17:17.138 "zcopy": false, 00:17:17.138 "get_zone_info": false, 
00:17:17.138 "zone_management": false, 00:17:17.138 "zone_append": false, 00:17:17.138 "compare": false, 00:17:17.138 "compare_and_write": false, 00:17:17.138 "abort": false, 00:17:17.138 "seek_hole": false, 00:17:17.138 "seek_data": false, 00:17:17.138 "copy": false, 00:17:17.138 "nvme_iov_md": false 00:17:17.138 }, 00:17:17.138 "driver_specific": { 00:17:17.138 "raid": { 00:17:17.138 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:17.138 "strip_size_kb": 64, 00:17:17.138 "state": "online", 00:17:17.138 "raid_level": "raid5f", 00:17:17.138 "superblock": true, 00:17:17.138 "num_base_bdevs": 3, 00:17:17.138 "num_base_bdevs_discovered": 3, 00:17:17.138 "num_base_bdevs_operational": 3, 00:17:17.138 "base_bdevs_list": [ 00:17:17.138 { 00:17:17.138 "name": "pt1", 00:17:17.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.138 "is_configured": true, 00:17:17.138 "data_offset": 2048, 00:17:17.138 "data_size": 63488 00:17:17.138 }, 00:17:17.138 { 00:17:17.138 "name": "pt2", 00:17:17.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.138 "is_configured": true, 00:17:17.138 "data_offset": 2048, 00:17:17.138 "data_size": 63488 00:17:17.138 }, 00:17:17.138 { 00:17:17.138 "name": "pt3", 00:17:17.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.138 "is_configured": true, 00:17:17.138 "data_offset": 2048, 00:17:17.138 "data_size": 63488 00:17:17.138 } 00:17:17.138 ] 00:17:17.138 } 00:17:17.138 } 00:17:17.138 }' 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:17.138 pt2 00:17:17.138 pt3' 00:17:17.138 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:17.436 [2024-11-27 04:34:13.901830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6e041cc8-2d52-4dab-8fdd-1c05c29cbd56 '!=' 6e041cc8-2d52-4dab-8fdd-1c05c29cbd56 ']' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:17.436 04:34:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.436 [2024-11-27 04:34:13.953581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.436 04:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.717 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.717 "name": "raid_bdev1", 00:17:17.717 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:17.717 "strip_size_kb": 64, 00:17:17.717 "state": "online", 00:17:17.717 "raid_level": "raid5f", 00:17:17.717 "superblock": true, 00:17:17.717 "num_base_bdevs": 3, 00:17:17.717 "num_base_bdevs_discovered": 2, 00:17:17.717 "num_base_bdevs_operational": 2, 00:17:17.717 "base_bdevs_list": [ 00:17:17.717 { 00:17:17.717 "name": null, 00:17:17.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.717 "is_configured": false, 00:17:17.717 "data_offset": 0, 00:17:17.717 "data_size": 63488 00:17:17.717 }, 00:17:17.717 { 00:17:17.717 "name": "pt2", 00:17:17.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.717 "is_configured": true, 00:17:17.717 "data_offset": 2048, 00:17:17.717 "data_size": 63488 00:17:17.717 }, 00:17:17.717 { 00:17:17.717 "name": "pt3", 00:17:17.717 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.717 "is_configured": true, 00:17:17.717 "data_offset": 2048, 00:17:17.717 "data_size": 63488 00:17:17.717 } 00:17:17.717 ] 00:17:17.717 }' 00:17:17.717 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.717 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.975 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.975 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 [2024-11-27 04:34:14.416758] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:17:17.975 [2024-11-27 04:34:14.416790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.975 [2024-11-27 04:34:14.416879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.975 [2024-11-27 04:34:14.416941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.975 [2024-11-27 04:34:14.416957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:17.975 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.975 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.975 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.976 04:34:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.976 [2024-11-27 04:34:14.512574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.976 [2024-11-27 04:34:14.512699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.976 [2024-11-27 04:34:14.512724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:17.976 [2024-11-27 04:34:14.512736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:17.976 [2024-11-27 04:34:14.515143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.976 [2024-11-27 04:34:14.515186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.976 [2024-11-27 04:34:14.515280] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:17.976 [2024-11-27 04:34:14.515332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.976 pt2 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.976 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.234 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.234 "name": "raid_bdev1", 00:17:18.234 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:18.234 "strip_size_kb": 64, 00:17:18.234 "state": "configuring", 00:17:18.234 "raid_level": "raid5f", 00:17:18.234 "superblock": true, 00:17:18.234 "num_base_bdevs": 3, 00:17:18.234 "num_base_bdevs_discovered": 1, 00:17:18.234 "num_base_bdevs_operational": 2, 00:17:18.234 "base_bdevs_list": [ 00:17:18.234 { 00:17:18.234 "name": null, 00:17:18.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.234 "is_configured": false, 00:17:18.234 "data_offset": 2048, 00:17:18.234 "data_size": 63488 00:17:18.234 }, 00:17:18.234 { 00:17:18.234 "name": "pt2", 00:17:18.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.234 "is_configured": true, 00:17:18.234 "data_offset": 2048, 00:17:18.234 "data_size": 63488 00:17:18.234 }, 00:17:18.234 { 00:17:18.234 "name": null, 00:17:18.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.234 "is_configured": false, 00:17:18.234 "data_offset": 2048, 00:17:18.234 "data_size": 63488 00:17:18.234 } 00:17:18.234 ] 00:17:18.234 }' 00:17:18.234 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.234 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.494 [2024-11-27 04:34:14.931873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:18.494 [2024-11-27 04:34:14.932031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.494 [2024-11-27 04:34:14.932077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:18.494 [2024-11-27 04:34:14.932150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.494 [2024-11-27 04:34:14.932724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.494 [2024-11-27 04:34:14.932762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:18.494 [2024-11-27 04:34:14.932852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:18.494 [2024-11-27 04:34:14.932881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.494 [2024-11-27 04:34:14.933011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:18.494 [2024-11-27 04:34:14.933023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:18.494 [2024-11-27 04:34:14.933308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:18.494 [2024-11-27 04:34:14.939313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:18.494 [2024-11-27 04:34:14.939338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:18.494 [2024-11-27 04:34:14.939719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.494 pt3 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.494 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.495 "name": "raid_bdev1", 00:17:18.495 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:18.495 "strip_size_kb": 64, 00:17:18.495 "state": "online", 00:17:18.495 "raid_level": "raid5f", 00:17:18.495 "superblock": true, 00:17:18.495 "num_base_bdevs": 3, 00:17:18.495 "num_base_bdevs_discovered": 2, 00:17:18.495 "num_base_bdevs_operational": 2, 00:17:18.495 "base_bdevs_list": [ 00:17:18.495 { 00:17:18.495 "name": null, 00:17:18.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.495 "is_configured": false, 00:17:18.495 "data_offset": 2048, 00:17:18.495 "data_size": 63488 00:17:18.495 }, 00:17:18.495 { 00:17:18.495 "name": "pt2", 00:17:18.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.495 "is_configured": true, 00:17:18.495 "data_offset": 2048, 00:17:18.495 "data_size": 63488 00:17:18.495 }, 00:17:18.495 { 00:17:18.495 "name": "pt3", 00:17:18.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.495 "is_configured": true, 00:17:18.495 "data_offset": 2048, 00:17:18.495 "data_size": 63488 00:17:18.495 } 00:17:18.495 ] 00:17:18.495 }' 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.495 04:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.062 [2024-11-27 04:34:15.411470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.062 [2024-11-27 04:34:15.411566] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.062 [2024-11-27 04:34:15.411687] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:17:19.062 [2024-11-27 04:34:15.411781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.062 [2024-11-27 04:34:15.411838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:19.062 04:34:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.062 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.062 [2024-11-27 04:34:15.487380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.062 [2024-11-27 04:34:15.487521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.062 [2024-11-27 04:34:15.487548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:19.062 [2024-11-27 04:34:15.487559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.062 [2024-11-27 04:34:15.490201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.062 [2024-11-27 04:34:15.490237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.062 [2024-11-27 04:34:15.490337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:19.062 [2024-11-27 04:34:15.490390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.063 [2024-11-27 04:34:15.490544] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:19.063 [2024-11-27 04:34:15.490558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.063 [2024-11-27 04:34:15.490577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:19.063 [2024-11-27 04:34:15.490640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.063 pt1 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:19.063 04:34:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.063 "name": "raid_bdev1", 00:17:19.063 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:19.063 "strip_size_kb": 64, 00:17:19.063 "state": "configuring", 00:17:19.063 "raid_level": "raid5f", 00:17:19.063 
"superblock": true, 00:17:19.063 "num_base_bdevs": 3, 00:17:19.063 "num_base_bdevs_discovered": 1, 00:17:19.063 "num_base_bdevs_operational": 2, 00:17:19.063 "base_bdevs_list": [ 00:17:19.063 { 00:17:19.063 "name": null, 00:17:19.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.063 "is_configured": false, 00:17:19.063 "data_offset": 2048, 00:17:19.063 "data_size": 63488 00:17:19.063 }, 00:17:19.063 { 00:17:19.063 "name": "pt2", 00:17:19.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.063 "is_configured": true, 00:17:19.063 "data_offset": 2048, 00:17:19.063 "data_size": 63488 00:17:19.063 }, 00:17:19.063 { 00:17:19.063 "name": null, 00:17:19.063 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.063 "is_configured": false, 00:17:19.063 "data_offset": 2048, 00:17:19.063 "data_size": 63488 00:17:19.063 } 00:17:19.063 ] 00:17:19.063 }' 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.063 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.631 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:19.631 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.632 [2024-11-27 04:34:15.966565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:19.632 [2024-11-27 04:34:15.966693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.632 [2024-11-27 04:34:15.966741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:19.632 [2024-11-27 04:34:15.966790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.632 [2024-11-27 04:34:15.967387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.632 [2024-11-27 04:34:15.967470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:19.632 [2024-11-27 04:34:15.967608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:19.632 [2024-11-27 04:34:15.967670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:19.632 [2024-11-27 04:34:15.967850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:19.632 [2024-11-27 04:34:15.967895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:19.632 [2024-11-27 04:34:15.968230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:19.632 [2024-11-27 04:34:15.975187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:19.632 [2024-11-27 04:34:15.975260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:19.632 [2024-11-27 04:34:15.975624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.632 pt3 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.632 04:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.632 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.632 04:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.632 "name": "raid_bdev1", 00:17:19.632 "uuid": "6e041cc8-2d52-4dab-8fdd-1c05c29cbd56", 00:17:19.632 "strip_size_kb": 64, 00:17:19.632 "state": "online", 00:17:19.632 "raid_level": 
"raid5f", 00:17:19.632 "superblock": true, 00:17:19.632 "num_base_bdevs": 3, 00:17:19.632 "num_base_bdevs_discovered": 2, 00:17:19.632 "num_base_bdevs_operational": 2, 00:17:19.632 "base_bdevs_list": [ 00:17:19.632 { 00:17:19.632 "name": null, 00:17:19.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.632 "is_configured": false, 00:17:19.632 "data_offset": 2048, 00:17:19.632 "data_size": 63488 00:17:19.632 }, 00:17:19.632 { 00:17:19.632 "name": "pt2", 00:17:19.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.632 "is_configured": true, 00:17:19.632 "data_offset": 2048, 00:17:19.632 "data_size": 63488 00:17:19.632 }, 00:17:19.632 { 00:17:19.632 "name": "pt3", 00:17:19.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.632 "is_configured": true, 00:17:19.632 "data_offset": 2048, 00:17:19.632 "data_size": 63488 00:17:19.632 } 00:17:19.632 ] 00:17:19.632 }' 00:17:19.632 04:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.632 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.891 04:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:19.891 04:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:19.891 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.891 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.150 [2024-11-27 04:34:16.523728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6e041cc8-2d52-4dab-8fdd-1c05c29cbd56 '!=' 6e041cc8-2d52-4dab-8fdd-1c05c29cbd56 ']' 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81459 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81459 ']' 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81459 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81459 00:17:20.150 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.151 killing process with pid 81459 00:17:20.151 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.151 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81459' 00:17:20.151 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81459 00:17:20.151 [2024-11-27 04:34:16.603226] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.151 [2024-11-27 04:34:16.603359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:17:20.151 04:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81459 00:17:20.151 [2024-11-27 04:34:16.603431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.151 [2024-11-27 04:34:16.603454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:20.410 [2024-11-27 04:34:16.945005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.804 04:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:21.804 ************************************ 00:17:21.804 END TEST raid5f_superblock_test 00:17:21.804 ************************************ 00:17:21.804 00:17:21.804 real 0m8.218s 00:17:21.804 user 0m12.797s 00:17:21.804 sys 0m1.402s 00:17:21.804 04:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.804 04:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.804 04:34:18 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:21.804 04:34:18 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:21.804 04:34:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:21.804 04:34:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.804 04:34:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.804 ************************************ 00:17:21.804 START TEST raid5f_rebuild_test 00:17:21.804 ************************************ 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:21.804 04:34:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81912 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81912 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81912 ']' 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.804 04:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.804 [2024-11-27 04:34:18.377621] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:21.804 [2024-11-27 04:34:18.377842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81912 ] 00:17:21.804 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:21.804 Zero copy mechanism will not be used. 00:17:22.063 [2024-11-27 04:34:18.551552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.321 [2024-11-27 04:34:18.675891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.578 [2024-11-27 04:34:18.905560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.578 [2024-11-27 04:34:18.905625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.835 BaseBdev1_malloc 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.835 
04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.835 [2024-11-27 04:34:19.294839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:22.835 [2024-11-27 04:34:19.294904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.835 [2024-11-27 04:34:19.294926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:22.835 [2024-11-27 04:34:19.294938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.835 [2024-11-27 04:34:19.297216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.835 [2024-11-27 04:34:19.297253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:22.835 BaseBdev1 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.835 BaseBdev2_malloc 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.835 [2024-11-27 04:34:19.352544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:22.835 [2024-11-27 04:34:19.352624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.835 [2024-11-27 04:34:19.352664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:22.835 [2024-11-27 04:34:19.352676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.835 [2024-11-27 04:34:19.355041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.835 [2024-11-27 04:34:19.355147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:22.835 BaseBdev2 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.835 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 BaseBdev3_malloc 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 [2024-11-27 04:34:19.426417] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:23.094 [2024-11-27 04:34:19.426480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.094 [2024-11-27 04:34:19.426503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:23.094 [2024-11-27 04:34:19.426515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.094 [2024-11-27 04:34:19.428875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.094 [2024-11-27 04:34:19.428996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:23.094 BaseBdev3 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 spare_malloc 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 spare_delay 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 [2024-11-27 04:34:19.496510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.094 [2024-11-27 04:34:19.496574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.094 [2024-11-27 04:34:19.496594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:23.094 [2024-11-27 04:34:19.496606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.094 [2024-11-27 04:34:19.498939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.094 [2024-11-27 04:34:19.499015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.094 spare 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 [2024-11-27 04:34:19.508560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.094 [2024-11-27 04:34:19.510377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.094 [2024-11-27 04:34:19.510482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.094 [2024-11-27 04:34:19.510603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:23.094 [2024-11-27 04:34:19.510646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:23.094 [2024-11-27 
04:34:19.510944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:23.094 [2024-11-27 04:34:19.517231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:23.094 [2024-11-27 04:34:19.517255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:23.094 [2024-11-27 04:34:19.517463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.094 "name": "raid_bdev1", 00:17:23.094 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:23.094 "strip_size_kb": 64, 00:17:23.094 "state": "online", 00:17:23.094 "raid_level": "raid5f", 00:17:23.094 "superblock": false, 00:17:23.094 "num_base_bdevs": 3, 00:17:23.094 "num_base_bdevs_discovered": 3, 00:17:23.094 "num_base_bdevs_operational": 3, 00:17:23.094 "base_bdevs_list": [ 00:17:23.094 { 00:17:23.094 "name": "BaseBdev1", 00:17:23.094 "uuid": "7c98fff6-4f2c-5b93-8960-da925beed3b5", 00:17:23.094 "is_configured": true, 00:17:23.094 "data_offset": 0, 00:17:23.094 "data_size": 65536 00:17:23.094 }, 00:17:23.094 { 00:17:23.094 "name": "BaseBdev2", 00:17:23.094 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:23.094 "is_configured": true, 00:17:23.094 "data_offset": 0, 00:17:23.094 "data_size": 65536 00:17:23.094 }, 00:17:23.094 { 00:17:23.094 "name": "BaseBdev3", 00:17:23.094 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:23.094 "is_configured": true, 00:17:23.094 "data_offset": 0, 00:17:23.094 "data_size": 65536 00:17:23.094 } 00:17:23.094 ] 00:17:23.094 }' 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.094 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.353 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:23.353 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:23.611 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.611 04:34:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.611 [2024-11-27 04:34:19.940458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.611 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.611 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:23.611 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.611 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.611 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.611 04:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:23.611 04:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.611 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:23.869 [2024-11-27 04:34:20.247781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:23.869 /dev/nbd0 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.869 1+0 records in 00:17:23.869 1+0 records out 00:17:23.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381217 s, 10.7 MB/s 00:17:23.869 
04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:23.869 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:24.435 512+0 records in 00:17:24.435 512+0 records out 00:17:24.435 67108864 bytes (67 MB, 64 MiB) copied, 0.425353 s, 158 MB/s 00:17:24.435 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:24.435 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.435 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:24.435 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.435 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:24.435 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:17:24.435 04:34:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.435 [2024-11-27 04:34:20.986998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.435 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.693 [2024-11-27 04:34:21.019857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.693 04:34:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.693 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.694 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.694 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.694 "name": "raid_bdev1", 00:17:24.694 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:24.694 "strip_size_kb": 64, 00:17:24.694 "state": "online", 00:17:24.694 "raid_level": "raid5f", 00:17:24.694 "superblock": false, 00:17:24.694 "num_base_bdevs": 3, 00:17:24.694 "num_base_bdevs_discovered": 2, 00:17:24.694 "num_base_bdevs_operational": 2, 00:17:24.694 "base_bdevs_list": [ 00:17:24.694 { 00:17:24.694 "name": null, 00:17:24.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.694 "is_configured": false, 00:17:24.694 "data_offset": 0, 00:17:24.694 "data_size": 65536 00:17:24.694 }, 00:17:24.694 { 00:17:24.694 
"name": "BaseBdev2", 00:17:24.694 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:24.694 "is_configured": true, 00:17:24.694 "data_offset": 0, 00:17:24.694 "data_size": 65536 00:17:24.694 }, 00:17:24.694 { 00:17:24.694 "name": "BaseBdev3", 00:17:24.694 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:24.694 "is_configured": true, 00:17:24.694 "data_offset": 0, 00:17:24.694 "data_size": 65536 00:17:24.694 } 00:17:24.694 ] 00:17:24.694 }' 00:17:24.694 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.694 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.952 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:24.952 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.952 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.952 [2024-11-27 04:34:21.487217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.952 [2024-11-27 04:34:21.508249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:24.952 04:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.952 04:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:24.952 [2024-11-27 04:34:21.517581] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.328 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.328 "name": "raid_bdev1", 00:17:26.328 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:26.328 "strip_size_kb": 64, 00:17:26.328 "state": "online", 00:17:26.328 "raid_level": "raid5f", 00:17:26.328 "superblock": false, 00:17:26.328 "num_base_bdevs": 3, 00:17:26.328 "num_base_bdevs_discovered": 3, 00:17:26.328 "num_base_bdevs_operational": 3, 00:17:26.328 "process": { 00:17:26.328 "type": "rebuild", 00:17:26.328 "target": "spare", 00:17:26.328 "progress": { 00:17:26.328 "blocks": 18432, 00:17:26.328 "percent": 14 00:17:26.328 } 00:17:26.328 }, 00:17:26.328 "base_bdevs_list": [ 00:17:26.328 { 00:17:26.328 "name": "spare", 00:17:26.328 "uuid": "4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:26.328 "is_configured": true, 00:17:26.328 "data_offset": 0, 00:17:26.328 "data_size": 65536 00:17:26.328 }, 00:17:26.328 { 00:17:26.328 "name": "BaseBdev2", 00:17:26.328 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:26.329 "is_configured": true, 00:17:26.329 "data_offset": 0, 00:17:26.329 "data_size": 65536 00:17:26.329 }, 00:17:26.329 { 00:17:26.329 "name": "BaseBdev3", 00:17:26.329 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:26.329 "is_configured": true, 00:17:26.329 "data_offset": 0, 00:17:26.329 
"data_size": 65536 00:17:26.329 } 00:17:26.329 ] 00:17:26.329 }' 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.329 [2024-11-27 04:34:22.649653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.329 [2024-11-27 04:34:22.729815] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:26.329 [2024-11-27 04:34:22.729911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.329 [2024-11-27 04:34:22.729934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.329 [2024-11-27 04:34:22.729943] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.329 "name": "raid_bdev1", 00:17:26.329 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:26.329 "strip_size_kb": 64, 00:17:26.329 "state": "online", 00:17:26.329 "raid_level": "raid5f", 00:17:26.329 "superblock": false, 00:17:26.329 "num_base_bdevs": 3, 00:17:26.329 "num_base_bdevs_discovered": 2, 00:17:26.329 "num_base_bdevs_operational": 2, 00:17:26.329 "base_bdevs_list": [ 00:17:26.329 { 00:17:26.329 "name": null, 00:17:26.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.329 "is_configured": false, 00:17:26.329 "data_offset": 0, 00:17:26.329 "data_size": 65536 00:17:26.329 }, 00:17:26.329 { 00:17:26.329 "name": "BaseBdev2", 00:17:26.329 
"uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:26.329 "is_configured": true, 00:17:26.329 "data_offset": 0, 00:17:26.329 "data_size": 65536 00:17:26.329 }, 00:17:26.329 { 00:17:26.329 "name": "BaseBdev3", 00:17:26.329 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:26.329 "is_configured": true, 00:17:26.329 "data_offset": 0, 00:17:26.329 "data_size": 65536 00:17:26.329 } 00:17:26.329 ] 00:17:26.329 }' 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.329 04:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.895 "name": "raid_bdev1", 00:17:26.895 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:26.895 "strip_size_kb": 64, 00:17:26.895 "state": "online", 00:17:26.895 "raid_level": 
"raid5f", 00:17:26.895 "superblock": false, 00:17:26.895 "num_base_bdevs": 3, 00:17:26.895 "num_base_bdevs_discovered": 2, 00:17:26.895 "num_base_bdevs_operational": 2, 00:17:26.895 "base_bdevs_list": [ 00:17:26.895 { 00:17:26.895 "name": null, 00:17:26.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.895 "is_configured": false, 00:17:26.895 "data_offset": 0, 00:17:26.895 "data_size": 65536 00:17:26.895 }, 00:17:26.895 { 00:17:26.895 "name": "BaseBdev2", 00:17:26.895 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:26.895 "is_configured": true, 00:17:26.895 "data_offset": 0, 00:17:26.895 "data_size": 65536 00:17:26.895 }, 00:17:26.895 { 00:17:26.895 "name": "BaseBdev3", 00:17:26.895 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:26.895 "is_configured": true, 00:17:26.895 "data_offset": 0, 00:17:26.895 "data_size": 65536 00:17:26.895 } 00:17:26.895 ] 00:17:26.895 }' 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.895 [2024-11-27 04:34:23.327929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.895 [2024-11-27 04:34:23.346140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.895 04:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:26.895 [2024-11-27 04:34:23.354978] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.827 "name": "raid_bdev1", 00:17:27.827 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:27.827 "strip_size_kb": 64, 00:17:27.827 "state": "online", 00:17:27.827 "raid_level": "raid5f", 00:17:27.827 "superblock": false, 00:17:27.827 "num_base_bdevs": 3, 00:17:27.827 "num_base_bdevs_discovered": 3, 00:17:27.827 "num_base_bdevs_operational": 3, 00:17:27.827 "process": { 00:17:27.827 "type": "rebuild", 00:17:27.827 "target": "spare", 00:17:27.827 "progress": { 00:17:27.827 "blocks": 20480, 00:17:27.827 
"percent": 15 00:17:27.827 } 00:17:27.827 }, 00:17:27.827 "base_bdevs_list": [ 00:17:27.827 { 00:17:27.827 "name": "spare", 00:17:27.827 "uuid": "4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:27.827 "is_configured": true, 00:17:27.827 "data_offset": 0, 00:17:27.827 "data_size": 65536 00:17:27.827 }, 00:17:27.827 { 00:17:27.827 "name": "BaseBdev2", 00:17:27.827 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:27.827 "is_configured": true, 00:17:27.827 "data_offset": 0, 00:17:27.827 "data_size": 65536 00:17:27.827 }, 00:17:27.827 { 00:17:27.827 "name": "BaseBdev3", 00:17:27.827 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:27.827 "is_configured": true, 00:17:27.827 "data_offset": 0, 00:17:27.827 "data_size": 65536 00:17:27.827 } 00:17:27.827 ] 00:17:27.827 }' 00:17:27.827 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.085 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.085 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.085 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.085 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:28.085 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=572 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.086 "name": "raid_bdev1", 00:17:28.086 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:28.086 "strip_size_kb": 64, 00:17:28.086 "state": "online", 00:17:28.086 "raid_level": "raid5f", 00:17:28.086 "superblock": false, 00:17:28.086 "num_base_bdevs": 3, 00:17:28.086 "num_base_bdevs_discovered": 3, 00:17:28.086 "num_base_bdevs_operational": 3, 00:17:28.086 "process": { 00:17:28.086 "type": "rebuild", 00:17:28.086 "target": "spare", 00:17:28.086 "progress": { 00:17:28.086 "blocks": 22528, 00:17:28.086 "percent": 17 00:17:28.086 } 00:17:28.086 }, 00:17:28.086 "base_bdevs_list": [ 00:17:28.086 { 00:17:28.086 "name": "spare", 00:17:28.086 "uuid": "4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:28.086 "is_configured": true, 00:17:28.086 "data_offset": 0, 00:17:28.086 "data_size": 65536 00:17:28.086 }, 00:17:28.086 { 00:17:28.086 "name": "BaseBdev2", 00:17:28.086 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:28.086 "is_configured": true, 00:17:28.086 "data_offset": 0, 00:17:28.086 
"data_size": 65536 00:17:28.086 }, 00:17:28.086 { 00:17:28.086 "name": "BaseBdev3", 00:17:28.086 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:28.086 "is_configured": true, 00:17:28.086 "data_offset": 0, 00:17:28.086 "data_size": 65536 00:17:28.086 } 00:17:28.086 ] 00:17:28.086 }' 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.086 04:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.463 "name": "raid_bdev1", 00:17:29.463 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:29.463 "strip_size_kb": 64, 00:17:29.463 "state": "online", 00:17:29.463 "raid_level": "raid5f", 00:17:29.463 "superblock": false, 00:17:29.463 "num_base_bdevs": 3, 00:17:29.463 "num_base_bdevs_discovered": 3, 00:17:29.463 "num_base_bdevs_operational": 3, 00:17:29.463 "process": { 00:17:29.463 "type": "rebuild", 00:17:29.463 "target": "spare", 00:17:29.463 "progress": { 00:17:29.463 "blocks": 45056, 00:17:29.463 "percent": 34 00:17:29.463 } 00:17:29.463 }, 00:17:29.463 "base_bdevs_list": [ 00:17:29.463 { 00:17:29.463 "name": "spare", 00:17:29.463 "uuid": "4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:29.463 "is_configured": true, 00:17:29.463 "data_offset": 0, 00:17:29.463 "data_size": 65536 00:17:29.463 }, 00:17:29.463 { 00:17:29.463 "name": "BaseBdev2", 00:17:29.463 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:29.463 "is_configured": true, 00:17:29.463 "data_offset": 0, 00:17:29.463 "data_size": 65536 00:17:29.463 }, 00:17:29.463 { 00:17:29.463 "name": "BaseBdev3", 00:17:29.463 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:29.463 "is_configured": true, 00:17:29.463 "data_offset": 0, 00:17:29.463 "data_size": 65536 00:17:29.463 } 00:17:29.463 ] 00:17:29.463 }' 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.463 04:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.397 "name": "raid_bdev1", 00:17:30.397 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:30.397 "strip_size_kb": 64, 00:17:30.397 "state": "online", 00:17:30.397 "raid_level": "raid5f", 00:17:30.397 "superblock": false, 00:17:30.397 "num_base_bdevs": 3, 00:17:30.397 "num_base_bdevs_discovered": 3, 00:17:30.397 "num_base_bdevs_operational": 3, 00:17:30.397 "process": { 00:17:30.397 "type": "rebuild", 00:17:30.397 "target": "spare", 00:17:30.397 "progress": { 00:17:30.397 "blocks": 69632, 00:17:30.397 "percent": 53 00:17:30.397 } 00:17:30.397 }, 00:17:30.397 "base_bdevs_list": [ 00:17:30.397 { 00:17:30.397 "name": "spare", 00:17:30.397 "uuid": 
"4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:30.397 "is_configured": true, 00:17:30.397 "data_offset": 0, 00:17:30.397 "data_size": 65536 00:17:30.397 }, 00:17:30.397 { 00:17:30.397 "name": "BaseBdev2", 00:17:30.397 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:30.397 "is_configured": true, 00:17:30.397 "data_offset": 0, 00:17:30.397 "data_size": 65536 00:17:30.397 }, 00:17:30.397 { 00:17:30.397 "name": "BaseBdev3", 00:17:30.397 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:30.397 "is_configured": true, 00:17:30.397 "data_offset": 0, 00:17:30.397 "data_size": 65536 00:17:30.397 } 00:17:30.397 ] 00:17:30.397 }' 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.397 04:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.775 04:34:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.775 "name": "raid_bdev1", 00:17:31.775 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:31.775 "strip_size_kb": 64, 00:17:31.775 "state": "online", 00:17:31.775 "raid_level": "raid5f", 00:17:31.775 "superblock": false, 00:17:31.775 "num_base_bdevs": 3, 00:17:31.775 "num_base_bdevs_discovered": 3, 00:17:31.775 "num_base_bdevs_operational": 3, 00:17:31.775 "process": { 00:17:31.775 "type": "rebuild", 00:17:31.775 "target": "spare", 00:17:31.775 "progress": { 00:17:31.775 "blocks": 92160, 00:17:31.775 "percent": 70 00:17:31.775 } 00:17:31.775 }, 00:17:31.775 "base_bdevs_list": [ 00:17:31.775 { 00:17:31.775 "name": "spare", 00:17:31.775 "uuid": "4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:31.775 "is_configured": true, 00:17:31.775 "data_offset": 0, 00:17:31.775 "data_size": 65536 00:17:31.775 }, 00:17:31.775 { 00:17:31.775 "name": "BaseBdev2", 00:17:31.775 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:31.775 "is_configured": true, 00:17:31.775 "data_offset": 0, 00:17:31.775 "data_size": 65536 00:17:31.775 }, 00:17:31.775 { 00:17:31.775 "name": "BaseBdev3", 00:17:31.775 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:31.775 "is_configured": true, 00:17:31.775 "data_offset": 0, 00:17:31.775 "data_size": 65536 00:17:31.775 } 00:17:31.775 ] 00:17:31.775 }' 00:17:31.775 04:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.775 04:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.775 04:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.775 04:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.775 04:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.711 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.711 "name": "raid_bdev1", 00:17:32.711 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:32.712 "strip_size_kb": 64, 00:17:32.712 "state": "online", 00:17:32.712 "raid_level": "raid5f", 00:17:32.712 "superblock": false, 00:17:32.712 "num_base_bdevs": 3, 00:17:32.712 "num_base_bdevs_discovered": 3, 00:17:32.712 
"num_base_bdevs_operational": 3, 00:17:32.712 "process": { 00:17:32.712 "type": "rebuild", 00:17:32.712 "target": "spare", 00:17:32.712 "progress": { 00:17:32.712 "blocks": 114688, 00:17:32.712 "percent": 87 00:17:32.712 } 00:17:32.712 }, 00:17:32.712 "base_bdevs_list": [ 00:17:32.712 { 00:17:32.712 "name": "spare", 00:17:32.712 "uuid": "4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:32.712 "is_configured": true, 00:17:32.712 "data_offset": 0, 00:17:32.712 "data_size": 65536 00:17:32.712 }, 00:17:32.712 { 00:17:32.712 "name": "BaseBdev2", 00:17:32.712 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:32.712 "is_configured": true, 00:17:32.712 "data_offset": 0, 00:17:32.712 "data_size": 65536 00:17:32.712 }, 00:17:32.712 { 00:17:32.712 "name": "BaseBdev3", 00:17:32.712 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:32.712 "is_configured": true, 00:17:32.712 "data_offset": 0, 00:17:32.712 "data_size": 65536 00:17:32.712 } 00:17:32.712 ] 00:17:32.712 }' 00:17:32.712 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.712 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.712 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.712 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.712 04:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.279 [2024-11-27 04:34:29.818593] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:33.279 [2024-11-27 04:34:29.818719] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:33.279 [2024-11-27 04:34:29.818773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.846 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:17:33.846 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.846 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.846 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.846 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.847 "name": "raid_bdev1", 00:17:33.847 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:33.847 "strip_size_kb": 64, 00:17:33.847 "state": "online", 00:17:33.847 "raid_level": "raid5f", 00:17:33.847 "superblock": false, 00:17:33.847 "num_base_bdevs": 3, 00:17:33.847 "num_base_bdevs_discovered": 3, 00:17:33.847 "num_base_bdevs_operational": 3, 00:17:33.847 "base_bdevs_list": [ 00:17:33.847 { 00:17:33.847 "name": "spare", 00:17:33.847 "uuid": "4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:33.847 "is_configured": true, 00:17:33.847 "data_offset": 0, 00:17:33.847 "data_size": 65536 00:17:33.847 }, 00:17:33.847 { 00:17:33.847 "name": "BaseBdev2", 00:17:33.847 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:33.847 "is_configured": true, 00:17:33.847 
"data_offset": 0, 00:17:33.847 "data_size": 65536 00:17:33.847 }, 00:17:33.847 { 00:17:33.847 "name": "BaseBdev3", 00:17:33.847 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:33.847 "is_configured": true, 00:17:33.847 "data_offset": 0, 00:17:33.847 "data_size": 65536 00:17:33.847 } 00:17:33.847 ] 00:17:33.847 }' 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.847 04:34:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.847 "name": "raid_bdev1", 00:17:33.847 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:33.847 "strip_size_kb": 64, 00:17:33.847 "state": "online", 00:17:33.847 "raid_level": "raid5f", 00:17:33.847 "superblock": false, 00:17:33.847 "num_base_bdevs": 3, 00:17:33.847 "num_base_bdevs_discovered": 3, 00:17:33.847 "num_base_bdevs_operational": 3, 00:17:33.847 "base_bdevs_list": [ 00:17:33.847 { 00:17:33.847 "name": "spare", 00:17:33.847 "uuid": "4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:33.847 "is_configured": true, 00:17:33.847 "data_offset": 0, 00:17:33.847 "data_size": 65536 00:17:33.847 }, 00:17:33.847 { 00:17:33.847 "name": "BaseBdev2", 00:17:33.847 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:33.847 "is_configured": true, 00:17:33.847 "data_offset": 0, 00:17:33.847 "data_size": 65536 00:17:33.847 }, 00:17:33.847 { 00:17:33.847 "name": "BaseBdev3", 00:17:33.847 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:33.847 "is_configured": true, 00:17:33.847 "data_offset": 0, 00:17:33.847 "data_size": 65536 00:17:33.847 } 00:17:33.847 ] 00:17:33.847 }' 00:17:33.847 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.105 04:34:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.105 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.105 "name": "raid_bdev1", 00:17:34.105 "uuid": "314e2f78-7787-4019-8dd0-29a9c5d30ad7", 00:17:34.105 "strip_size_kb": 64, 00:17:34.106 "state": "online", 00:17:34.106 "raid_level": "raid5f", 00:17:34.106 "superblock": false, 00:17:34.106 "num_base_bdevs": 3, 00:17:34.106 "num_base_bdevs_discovered": 3, 00:17:34.106 "num_base_bdevs_operational": 3, 00:17:34.106 "base_bdevs_list": [ 00:17:34.106 { 00:17:34.106 "name": "spare", 00:17:34.106 "uuid": "4899b749-7843-5ce0-b29e-9e82bc4565f8", 00:17:34.106 "is_configured": true, 00:17:34.106 "data_offset": 0, 00:17:34.106 "data_size": 65536 00:17:34.106 }, 00:17:34.106 { 00:17:34.106 
"name": "BaseBdev2", 00:17:34.106 "uuid": "612cdcf4-1c59-59cc-a8d3-b99abfd3ffc9", 00:17:34.106 "is_configured": true, 00:17:34.106 "data_offset": 0, 00:17:34.106 "data_size": 65536 00:17:34.106 }, 00:17:34.106 { 00:17:34.106 "name": "BaseBdev3", 00:17:34.106 "uuid": "e79e154e-f882-5434-9b0b-bdee74dca644", 00:17:34.106 "is_configured": true, 00:17:34.106 "data_offset": 0, 00:17:34.106 "data_size": 65536 00:17:34.106 } 00:17:34.106 ] 00:17:34.106 }' 00:17:34.106 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.106 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.672 04:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:34.672 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.672 04:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.672 [2024-11-27 04:34:30.997247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.672 [2024-11-27 04:34:30.997288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.672 [2024-11-27 04:34:30.997392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.672 [2024-11-27 04:34:30.997493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.672 [2024-11-27 04:34:30.997516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:34.672 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.672 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.672 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.673 04:34:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:34.673 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:34.939 /dev/nbd0 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:34.939 04:34:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.939 1+0 records in 00:17:34.939 1+0 records out 00:17:34.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261177 s, 15.7 MB/s 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.939 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:34.940 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.940 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:34.940 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:35.227 /dev/nbd1 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:35.227 1+0 records in 00:17:35.227 1+0 records out 00:17:35.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428396 s, 9.6 MB/s 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:35.227 04:34:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:35.227 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:35.485 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:35.485 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.485 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:35.485 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:35.485 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:35.485 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.485 04:34:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:35.485 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81912 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81912 ']' 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81912 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.744 04:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81912 00:17:36.001 04:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.001 04:34:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.001 killing process with pid 81912 00:17:36.001 04:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81912' 00:17:36.001 04:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81912 00:17:36.001 Received shutdown signal, test time was about 60.000000 seconds 00:17:36.001 00:17:36.001 Latency(us) 00:17:36.001 [2024-11-27T04:34:32.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.002 [2024-11-27T04:34:32.589Z] =================================================================================================================== 00:17:36.002 [2024-11-27T04:34:32.589Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:36.002 [2024-11-27 04:34:32.356736] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:36.002 04:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81912 00:17:36.260 [2024-11-27 04:34:32.775848] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.636 04:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:37.636 00:17:37.636 real 0m15.694s 00:17:37.636 user 0m19.242s 00:17:37.636 sys 0m2.124s 00:17:37.636 04:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.636 04:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.636 ************************************ 00:17:37.636 END TEST raid5f_rebuild_test 00:17:37.636 ************************************ 00:17:37.636 04:34:34 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:37.636 04:34:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:37.636 04:34:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.636 04:34:34 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:17:37.636 ************************************ 00:17:37.636 START TEST raid5f_rebuild_test_sb 00:17:37.636 ************************************ 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82359 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82359 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82359 ']' 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.636 04:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.636 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:37.636 Zero copy mechanism will not be used. 00:17:37.636 [2024-11-27 04:34:34.131683] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:37.636 [2024-11-27 04:34:34.131827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82359 ] 00:17:37.895 [2024-11-27 04:34:34.308923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.895 [2024-11-27 04:34:34.428440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.153 [2024-11-27 04:34:34.628134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.153 [2024-11-27 04:34:34.628211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.718 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.719 BaseBdev1_malloc 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.719 [2024-11-27 04:34:35.060768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.719 [2024-11-27 04:34:35.060839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.719 [2024-11-27 04:34:35.060865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:38.719 [2024-11-27 04:34:35.060881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.719 [2024-11-27 04:34:35.063324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.719 [2024-11-27 04:34:35.063372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.719 BaseBdev1 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:38.719 04:34:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.719 BaseBdev2_malloc 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.719 [2024-11-27 04:34:35.122744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:38.719 [2024-11-27 04:34:35.122820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.719 [2024-11-27 04:34:35.122847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:38.719 [2024-11-27 04:34:35.122859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.719 [2024-11-27 04:34:35.125334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.719 [2024-11-27 04:34:35.125379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:38.719 BaseBdev2 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:38.719 BaseBdev3_malloc 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.719 [2024-11-27 04:34:35.204460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:38.719 [2024-11-27 04:34:35.204569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.719 [2024-11-27 04:34:35.204610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:38.719 [2024-11-27 04:34:35.204630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.719 [2024-11-27 04:34:35.207557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.719 [2024-11-27 04:34:35.207619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:38.719 BaseBdev3 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.719 spare_malloc 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.719 spare_delay 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.719 [2024-11-27 04:34:35.267750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:38.719 [2024-11-27 04:34:35.267837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.719 [2024-11-27 04:34:35.267859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:38.719 [2024-11-27 04:34:35.267872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.719 [2024-11-27 04:34:35.270346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.719 [2024-11-27 04:34:35.270394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:38.719 spare 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.719 [2024-11-27 04:34:35.275828] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.719 [2024-11-27 04:34:35.277876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.719 [2024-11-27 04:34:35.277954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.719 [2024-11-27 04:34:35.278170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:38.719 [2024-11-27 04:34:35.278190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:38.719 [2024-11-27 04:34:35.278498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:38.719 [2024-11-27 04:34:35.284802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:38.719 [2024-11-27 04:34:35.284837] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:38.719 [2024-11-27 04:34:35.285122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.719 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.720 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.720 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:17:38.720 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.720 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.720 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.720 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.720 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.720 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.720 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.978 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.978 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.978 "name": "raid_bdev1", 00:17:38.978 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:38.978 "strip_size_kb": 64, 00:17:38.978 "state": "online", 00:17:38.978 "raid_level": "raid5f", 00:17:38.978 "superblock": true, 00:17:38.978 "num_base_bdevs": 3, 00:17:38.978 "num_base_bdevs_discovered": 3, 00:17:38.978 "num_base_bdevs_operational": 3, 00:17:38.978 "base_bdevs_list": [ 00:17:38.978 { 00:17:38.978 "name": "BaseBdev1", 00:17:38.978 "uuid": "8b3025b4-1688-5013-9b81-0b21cc897e23", 00:17:38.978 "is_configured": true, 00:17:38.978 "data_offset": 2048, 00:17:38.978 "data_size": 63488 00:17:38.978 }, 00:17:38.978 { 00:17:38.978 "name": "BaseBdev2", 00:17:38.978 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:38.978 "is_configured": true, 00:17:38.978 "data_offset": 2048, 00:17:38.978 "data_size": 63488 00:17:38.978 }, 00:17:38.978 { 00:17:38.978 "name": "BaseBdev3", 00:17:38.978 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:38.978 "is_configured": true, 
00:17:38.978 "data_offset": 2048, 00:17:38.978 "data_size": 63488 00:17:38.978 } 00:17:38.978 ] 00:17:38.978 }' 00:17:38.978 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.978 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.237 [2024-11-27 04:34:35.747969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.237 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:39.496 04:34:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:39.496 04:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:39.496 [2024-11-27 04:34:36.047323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:39.496 /dev/nbd0 00:17:39.755 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:39.755 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:39.755 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:39.755 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:39.755 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:39.755 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:17:39.755 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:39.755 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.756 1+0 records in 00:17:39.756 1+0 records out 00:17:39.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397882 s, 10.3 MB/s 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:39.756 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:40.015 496+0 records in 00:17:40.015 496+0 records out 00:17:40.015 65011712 bytes (65 MB, 62 MiB) copied, 0.436036 s, 149 MB/s 00:17:40.015 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:40.015 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.015 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:40.015 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:40.015 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:40.015 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:40.015 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:40.274 [2024-11-27 04:34:36.797058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.274 [2024-11-27 04:34:36.829832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.274 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.275 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.275 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.275 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.275 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.275 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.275 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.275 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.275 04:34:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.275 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.534 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.534 "name": "raid_bdev1", 00:17:40.534 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:40.534 "strip_size_kb": 64, 00:17:40.534 "state": "online", 00:17:40.534 "raid_level": "raid5f", 00:17:40.534 "superblock": true, 00:17:40.534 "num_base_bdevs": 3, 00:17:40.534 "num_base_bdevs_discovered": 2, 00:17:40.534 "num_base_bdevs_operational": 2, 00:17:40.534 "base_bdevs_list": [ 00:17:40.534 { 00:17:40.534 "name": null, 00:17:40.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.534 "is_configured": false, 00:17:40.534 "data_offset": 0, 00:17:40.534 "data_size": 63488 00:17:40.534 }, 00:17:40.534 { 00:17:40.534 "name": "BaseBdev2", 00:17:40.534 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:40.534 "is_configured": true, 00:17:40.534 "data_offset": 2048, 00:17:40.534 "data_size": 63488 00:17:40.534 }, 00:17:40.534 { 00:17:40.534 "name": "BaseBdev3", 00:17:40.534 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:40.534 "is_configured": true, 00:17:40.534 "data_offset": 2048, 00:17:40.534 "data_size": 63488 00:17:40.534 } 00:17:40.534 ] 00:17:40.534 }' 00:17:40.534 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.534 04:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.793 04:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.793 04:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.793 04:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.793 [2024-11-27 04:34:37.293112] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.793 [2024-11-27 04:34:37.314371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:40.793 04:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.793 04:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:40.793 [2024-11-27 04:34:37.324579] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.170 "name": "raid_bdev1", 00:17:42.170 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:42.170 "strip_size_kb": 64, 00:17:42.170 "state": "online", 00:17:42.170 "raid_level": "raid5f", 00:17:42.170 
"superblock": true, 00:17:42.170 "num_base_bdevs": 3, 00:17:42.170 "num_base_bdevs_discovered": 3, 00:17:42.170 "num_base_bdevs_operational": 3, 00:17:42.170 "process": { 00:17:42.170 "type": "rebuild", 00:17:42.170 "target": "spare", 00:17:42.170 "progress": { 00:17:42.170 "blocks": 20480, 00:17:42.170 "percent": 16 00:17:42.170 } 00:17:42.170 }, 00:17:42.170 "base_bdevs_list": [ 00:17:42.170 { 00:17:42.170 "name": "spare", 00:17:42.170 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:42.170 "is_configured": true, 00:17:42.170 "data_offset": 2048, 00:17:42.170 "data_size": 63488 00:17:42.170 }, 00:17:42.170 { 00:17:42.170 "name": "BaseBdev2", 00:17:42.170 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:42.170 "is_configured": true, 00:17:42.170 "data_offset": 2048, 00:17:42.170 "data_size": 63488 00:17:42.170 }, 00:17:42.170 { 00:17:42.170 "name": "BaseBdev3", 00:17:42.170 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:42.170 "is_configured": true, 00:17:42.170 "data_offset": 2048, 00:17:42.170 "data_size": 63488 00:17:42.170 } 00:17:42.170 ] 00:17:42.170 }' 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.170 [2024-11-27 04:34:38.476450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:17:42.170 [2024-11-27 04:34:38.536576] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:42.170 [2024-11-27 04:34:38.536680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.170 [2024-11-27 04:34:38.536703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.170 [2024-11-27 04:34:38.536712] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.170 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.170 "name": "raid_bdev1", 00:17:42.170 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:42.170 "strip_size_kb": 64, 00:17:42.170 "state": "online", 00:17:42.170 "raid_level": "raid5f", 00:17:42.170 "superblock": true, 00:17:42.170 "num_base_bdevs": 3, 00:17:42.170 "num_base_bdevs_discovered": 2, 00:17:42.170 "num_base_bdevs_operational": 2, 00:17:42.170 "base_bdevs_list": [ 00:17:42.170 { 00:17:42.170 "name": null, 00:17:42.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.171 "is_configured": false, 00:17:42.171 "data_offset": 0, 00:17:42.171 "data_size": 63488 00:17:42.171 }, 00:17:42.171 { 00:17:42.171 "name": "BaseBdev2", 00:17:42.171 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:42.171 "is_configured": true, 00:17:42.171 "data_offset": 2048, 00:17:42.171 "data_size": 63488 00:17:42.171 }, 00:17:42.171 { 00:17:42.171 "name": "BaseBdev3", 00:17:42.171 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:42.171 "is_configured": true, 00:17:42.171 "data_offset": 2048, 00:17:42.171 "data_size": 63488 00:17:42.171 } 00:17:42.171 ] 00:17:42.171 }' 00:17:42.171 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.171 04:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.738 04:34:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.738 "name": "raid_bdev1", 00:17:42.738 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:42.738 "strip_size_kb": 64, 00:17:42.738 "state": "online", 00:17:42.738 "raid_level": "raid5f", 00:17:42.738 "superblock": true, 00:17:42.738 "num_base_bdevs": 3, 00:17:42.738 "num_base_bdevs_discovered": 2, 00:17:42.738 "num_base_bdevs_operational": 2, 00:17:42.738 "base_bdevs_list": [ 00:17:42.738 { 00:17:42.738 "name": null, 00:17:42.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.738 "is_configured": false, 00:17:42.738 "data_offset": 0, 00:17:42.738 "data_size": 63488 00:17:42.738 }, 00:17:42.738 { 00:17:42.738 "name": "BaseBdev2", 00:17:42.738 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:42.738 "is_configured": true, 00:17:42.738 "data_offset": 2048, 00:17:42.738 "data_size": 63488 00:17:42.738 }, 00:17:42.738 { 00:17:42.738 "name": "BaseBdev3", 00:17:42.738 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:42.738 "is_configured": true, 00:17:42.738 "data_offset": 2048, 00:17:42.738 
"data_size": 63488 00:17:42.738 } 00:17:42.738 ] 00:17:42.738 }' 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.738 [2024-11-27 04:34:39.207742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.738 [2024-11-27 04:34:39.226200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.738 04:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:42.738 [2024-11-27 04:34:39.234873] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.672 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.932 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.932 "name": "raid_bdev1", 00:17:43.932 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:43.932 "strip_size_kb": 64, 00:17:43.932 "state": "online", 00:17:43.932 "raid_level": "raid5f", 00:17:43.932 "superblock": true, 00:17:43.932 "num_base_bdevs": 3, 00:17:43.932 "num_base_bdevs_discovered": 3, 00:17:43.932 "num_base_bdevs_operational": 3, 00:17:43.932 "process": { 00:17:43.932 "type": "rebuild", 00:17:43.932 "target": "spare", 00:17:43.932 "progress": { 00:17:43.932 "blocks": 18432, 00:17:43.932 "percent": 14 00:17:43.932 } 00:17:43.932 }, 00:17:43.932 "base_bdevs_list": [ 00:17:43.932 { 00:17:43.932 "name": "spare", 00:17:43.932 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:43.932 "is_configured": true, 00:17:43.932 "data_offset": 2048, 00:17:43.932 "data_size": 63488 00:17:43.932 }, 00:17:43.932 { 00:17:43.932 "name": "BaseBdev2", 00:17:43.932 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:43.932 "is_configured": true, 00:17:43.932 "data_offset": 2048, 00:17:43.932 "data_size": 63488 00:17:43.932 }, 00:17:43.932 { 00:17:43.932 "name": "BaseBdev3", 00:17:43.932 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:43.932 "is_configured": true, 00:17:43.932 "data_offset": 2048, 00:17:43.932 "data_size": 63488 00:17:43.932 } 00:17:43.932 ] 00:17:43.932 }' 
00:17:43.932 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:43.933 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=588 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.933 "name": "raid_bdev1", 00:17:43.933 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:43.933 "strip_size_kb": 64, 00:17:43.933 "state": "online", 00:17:43.933 "raid_level": "raid5f", 00:17:43.933 "superblock": true, 00:17:43.933 "num_base_bdevs": 3, 00:17:43.933 "num_base_bdevs_discovered": 3, 00:17:43.933 "num_base_bdevs_operational": 3, 00:17:43.933 "process": { 00:17:43.933 "type": "rebuild", 00:17:43.933 "target": "spare", 00:17:43.933 "progress": { 00:17:43.933 "blocks": 22528, 00:17:43.933 "percent": 17 00:17:43.933 } 00:17:43.933 }, 00:17:43.933 "base_bdevs_list": [ 00:17:43.933 { 00:17:43.933 "name": "spare", 00:17:43.933 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:43.933 "is_configured": true, 00:17:43.933 "data_offset": 2048, 00:17:43.933 "data_size": 63488 00:17:43.933 }, 00:17:43.933 { 00:17:43.933 "name": "BaseBdev2", 00:17:43.933 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:43.933 "is_configured": true, 00:17:43.933 "data_offset": 2048, 00:17:43.933 "data_size": 63488 00:17:43.933 }, 00:17:43.933 { 00:17:43.933 "name": "BaseBdev3", 00:17:43.933 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:43.933 "is_configured": true, 00:17:43.933 "data_offset": 2048, 00:17:43.933 "data_size": 63488 00:17:43.933 } 00:17:43.933 ] 00:17:43.933 }' 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:43.933 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.194 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.194 04:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.131 "name": "raid_bdev1", 00:17:45.131 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:45.131 "strip_size_kb": 64, 00:17:45.131 "state": "online", 00:17:45.131 "raid_level": "raid5f", 00:17:45.131 "superblock": true, 00:17:45.131 "num_base_bdevs": 3, 00:17:45.131 "num_base_bdevs_discovered": 3, 00:17:45.131 
"num_base_bdevs_operational": 3, 00:17:45.131 "process": { 00:17:45.131 "type": "rebuild", 00:17:45.131 "target": "spare", 00:17:45.131 "progress": { 00:17:45.131 "blocks": 45056, 00:17:45.131 "percent": 35 00:17:45.131 } 00:17:45.131 }, 00:17:45.131 "base_bdevs_list": [ 00:17:45.131 { 00:17:45.131 "name": "spare", 00:17:45.131 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:45.131 "is_configured": true, 00:17:45.131 "data_offset": 2048, 00:17:45.131 "data_size": 63488 00:17:45.131 }, 00:17:45.131 { 00:17:45.131 "name": "BaseBdev2", 00:17:45.131 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:45.131 "is_configured": true, 00:17:45.131 "data_offset": 2048, 00:17:45.131 "data_size": 63488 00:17:45.131 }, 00:17:45.131 { 00:17:45.131 "name": "BaseBdev3", 00:17:45.131 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:45.131 "is_configured": true, 00:17:45.131 "data_offset": 2048, 00:17:45.131 "data_size": 63488 00:17:45.131 } 00:17:45.131 ] 00:17:45.131 }' 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.131 04:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.512 "name": "raid_bdev1", 00:17:46.512 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:46.512 "strip_size_kb": 64, 00:17:46.512 "state": "online", 00:17:46.512 "raid_level": "raid5f", 00:17:46.512 "superblock": true, 00:17:46.512 "num_base_bdevs": 3, 00:17:46.512 "num_base_bdevs_discovered": 3, 00:17:46.512 "num_base_bdevs_operational": 3, 00:17:46.512 "process": { 00:17:46.512 "type": "rebuild", 00:17:46.512 "target": "spare", 00:17:46.512 "progress": { 00:17:46.512 "blocks": 69632, 00:17:46.512 "percent": 54 00:17:46.512 } 00:17:46.512 }, 00:17:46.512 "base_bdevs_list": [ 00:17:46.512 { 00:17:46.512 "name": "spare", 00:17:46.512 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:46.512 "is_configured": true, 00:17:46.512 "data_offset": 2048, 00:17:46.512 "data_size": 63488 00:17:46.512 }, 00:17:46.512 { 00:17:46.512 "name": "BaseBdev2", 00:17:46.512 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:46.512 "is_configured": true, 00:17:46.512 "data_offset": 2048, 00:17:46.512 "data_size": 63488 00:17:46.512 }, 00:17:46.512 { 00:17:46.512 "name": "BaseBdev3", 
00:17:46.512 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:46.512 "is_configured": true, 00:17:46.512 "data_offset": 2048, 00:17:46.512 "data_size": 63488 00:17:46.512 } 00:17:46.512 ] 00:17:46.512 }' 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.512 04:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.478 "name": "raid_bdev1", 00:17:47.478 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:47.478 "strip_size_kb": 64, 00:17:47.478 "state": "online", 00:17:47.478 "raid_level": "raid5f", 00:17:47.478 "superblock": true, 00:17:47.478 "num_base_bdevs": 3, 00:17:47.478 "num_base_bdevs_discovered": 3, 00:17:47.478 "num_base_bdevs_operational": 3, 00:17:47.478 "process": { 00:17:47.478 "type": "rebuild", 00:17:47.478 "target": "spare", 00:17:47.478 "progress": { 00:17:47.478 "blocks": 92160, 00:17:47.478 "percent": 72 00:17:47.478 } 00:17:47.478 }, 00:17:47.478 "base_bdevs_list": [ 00:17:47.478 { 00:17:47.478 "name": "spare", 00:17:47.478 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:47.478 "is_configured": true, 00:17:47.478 "data_offset": 2048, 00:17:47.478 "data_size": 63488 00:17:47.478 }, 00:17:47.478 { 00:17:47.478 "name": "BaseBdev2", 00:17:47.478 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:47.478 "is_configured": true, 00:17:47.478 "data_offset": 2048, 00:17:47.478 "data_size": 63488 00:17:47.478 }, 00:17:47.478 { 00:17:47.478 "name": "BaseBdev3", 00:17:47.478 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:47.478 "is_configured": true, 00:17:47.478 "data_offset": 2048, 00:17:47.478 "data_size": 63488 00:17:47.478 } 00:17:47.478 ] 00:17:47.478 }' 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.478 04:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.478 04:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.478 04:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:48.872 04:34:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.872 "name": "raid_bdev1", 00:17:48.872 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:48.872 "strip_size_kb": 64, 00:17:48.872 "state": "online", 00:17:48.872 "raid_level": "raid5f", 00:17:48.872 "superblock": true, 00:17:48.872 "num_base_bdevs": 3, 00:17:48.872 "num_base_bdevs_discovered": 3, 00:17:48.872 "num_base_bdevs_operational": 3, 00:17:48.872 "process": { 00:17:48.872 "type": "rebuild", 00:17:48.872 "target": "spare", 00:17:48.872 "progress": { 00:17:48.872 "blocks": 116736, 00:17:48.872 "percent": 91 00:17:48.872 } 00:17:48.872 }, 00:17:48.872 "base_bdevs_list": [ 00:17:48.872 { 00:17:48.872 "name": "spare", 00:17:48.872 "uuid": 
"081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:48.872 "is_configured": true, 00:17:48.872 "data_offset": 2048, 00:17:48.872 "data_size": 63488 00:17:48.872 }, 00:17:48.872 { 00:17:48.872 "name": "BaseBdev2", 00:17:48.872 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:48.872 "is_configured": true, 00:17:48.872 "data_offset": 2048, 00:17:48.872 "data_size": 63488 00:17:48.872 }, 00:17:48.872 { 00:17:48.872 "name": "BaseBdev3", 00:17:48.872 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:48.872 "is_configured": true, 00:17:48.872 "data_offset": 2048, 00:17:48.872 "data_size": 63488 00:17:48.872 } 00:17:48.872 ] 00:17:48.872 }' 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.872 04:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:49.131 [2024-11-27 04:34:45.496589] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:49.131 [2024-11-27 04:34:45.496697] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:49.131 [2024-11-27 04:34:45.496851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.697 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.698 "name": "raid_bdev1", 00:17:49.698 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:49.698 "strip_size_kb": 64, 00:17:49.698 "state": "online", 00:17:49.698 "raid_level": "raid5f", 00:17:49.698 "superblock": true, 00:17:49.698 "num_base_bdevs": 3, 00:17:49.698 "num_base_bdevs_discovered": 3, 00:17:49.698 "num_base_bdevs_operational": 3, 00:17:49.698 "base_bdevs_list": [ 00:17:49.698 { 00:17:49.698 "name": "spare", 00:17:49.698 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:49.698 "is_configured": true, 00:17:49.698 "data_offset": 2048, 00:17:49.698 "data_size": 63488 00:17:49.698 }, 00:17:49.698 { 00:17:49.698 "name": "BaseBdev2", 00:17:49.698 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:49.698 "is_configured": true, 00:17:49.698 "data_offset": 2048, 00:17:49.698 "data_size": 63488 00:17:49.698 }, 00:17:49.698 { 00:17:49.698 "name": "BaseBdev3", 00:17:49.698 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:49.698 "is_configured": true, 00:17:49.698 "data_offset": 2048, 00:17:49.698 "data_size": 63488 00:17:49.698 } 
00:17:49.698 ] 00:17:49.698 }' 00:17:49.698 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.956 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.957 "name": "raid_bdev1", 00:17:49.957 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:49.957 "strip_size_kb": 64, 00:17:49.957 "state": "online", 00:17:49.957 "raid_level": 
"raid5f", 00:17:49.957 "superblock": true, 00:17:49.957 "num_base_bdevs": 3, 00:17:49.957 "num_base_bdevs_discovered": 3, 00:17:49.957 "num_base_bdevs_operational": 3, 00:17:49.957 "base_bdevs_list": [ 00:17:49.957 { 00:17:49.957 "name": "spare", 00:17:49.957 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:49.957 "is_configured": true, 00:17:49.957 "data_offset": 2048, 00:17:49.957 "data_size": 63488 00:17:49.957 }, 00:17:49.957 { 00:17:49.957 "name": "BaseBdev2", 00:17:49.957 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:49.957 "is_configured": true, 00:17:49.957 "data_offset": 2048, 00:17:49.957 "data_size": 63488 00:17:49.957 }, 00:17:49.957 { 00:17:49.957 "name": "BaseBdev3", 00:17:49.957 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:49.957 "is_configured": true, 00:17:49.957 "data_offset": 2048, 00:17:49.957 "data_size": 63488 00:17:49.957 } 00:17:49.957 ] 00:17:49.957 }' 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.957 04:34:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.957 "name": "raid_bdev1", 00:17:49.957 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:49.957 "strip_size_kb": 64, 00:17:49.957 "state": "online", 00:17:49.957 "raid_level": "raid5f", 00:17:49.957 "superblock": true, 00:17:49.957 "num_base_bdevs": 3, 00:17:49.957 "num_base_bdevs_discovered": 3, 00:17:49.957 "num_base_bdevs_operational": 3, 00:17:49.957 "base_bdevs_list": [ 00:17:49.957 { 00:17:49.957 "name": "spare", 00:17:49.957 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:49.957 "is_configured": true, 00:17:49.957 "data_offset": 2048, 00:17:49.957 "data_size": 63488 00:17:49.957 }, 00:17:49.957 { 00:17:49.957 "name": "BaseBdev2", 00:17:49.957 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:49.957 "is_configured": true, 00:17:49.957 "data_offset": 2048, 00:17:49.957 
"data_size": 63488 00:17:49.957 }, 00:17:49.957 { 00:17:49.957 "name": "BaseBdev3", 00:17:49.957 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:49.957 "is_configured": true, 00:17:49.957 "data_offset": 2048, 00:17:49.957 "data_size": 63488 00:17:49.957 } 00:17:49.957 ] 00:17:49.957 }' 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.957 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.521 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.521 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.521 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.521 [2024-11-27 04:34:46.869430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.521 [2024-11-27 04:34:46.869475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.522 [2024-11-27 04:34:46.869588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.522 [2024-11-27 04:34:46.869686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.522 [2024-11-27 04:34:46.869708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@720 -- # jq length 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:50.522 04:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:50.780 /dev/nbd0 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:50.780 
04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:50.780 1+0 records in 00:17:50.780 1+0 records out 00:17:50.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025054 s, 16.3 MB/s 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:50.780 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:51.038 /dev/nbd1 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:51.038 1+0 records in 00:17:51.038 1+0 records out 00:17:51.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409002 s, 10.0 MB/s 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:51.038 04:34:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:51.038 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:51.295 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:51.295 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:51.295 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:51.295 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:51.295 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:51.295 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:51.295 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:51.554 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:51.554 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:51.554 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:51.554 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:51.554 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:51.554 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:51.554 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:51.554 04:34:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:51.554 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:51.554 04:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.813 04:34:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.813 [2024-11-27 04:34:48.222350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.813 [2024-11-27 04:34:48.222436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.813 [2024-11-27 04:34:48.222465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:51.813 [2024-11-27 04:34:48.222480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.813 [2024-11-27 04:34:48.225336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.813 [2024-11-27 04:34:48.225383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.813 [2024-11-27 04:34:48.225510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:51.813 [2024-11-27 04:34:48.225589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.813 [2024-11-27 04:34:48.225754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.813 [2024-11-27 04:34:48.225885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.813 spare 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.813 [2024-11-27 04:34:48.325835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:51.813 [2024-11-27 04:34:48.325910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, 
blocklen 512 00:17:51.813 [2024-11-27 04:34:48.326330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:51.813 [2024-11-27 04:34:48.333773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:51.813 [2024-11-27 04:34:48.333806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:51.813 [2024-11-27 04:34:48.334106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.813 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.814 "name": "raid_bdev1", 00:17:51.814 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:51.814 "strip_size_kb": 64, 00:17:51.814 "state": "online", 00:17:51.814 "raid_level": "raid5f", 00:17:51.814 "superblock": true, 00:17:51.814 "num_base_bdevs": 3, 00:17:51.814 "num_base_bdevs_discovered": 3, 00:17:51.814 "num_base_bdevs_operational": 3, 00:17:51.814 "base_bdevs_list": [ 00:17:51.814 { 00:17:51.814 "name": "spare", 00:17:51.814 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:51.814 "is_configured": true, 00:17:51.814 "data_offset": 2048, 00:17:51.814 "data_size": 63488 00:17:51.814 }, 00:17:51.814 { 00:17:51.814 "name": "BaseBdev2", 00:17:51.814 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:51.814 "is_configured": true, 00:17:51.814 "data_offset": 2048, 00:17:51.814 "data_size": 63488 00:17:51.814 }, 00:17:51.814 { 00:17:51.814 "name": "BaseBdev3", 00:17:51.814 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:51.814 "is_configured": true, 00:17:51.814 "data_offset": 2048, 00:17:51.814 "data_size": 63488 00:17:51.814 } 00:17:51.814 ] 00:17:51.814 }' 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.814 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.403 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.403 "name": "raid_bdev1", 00:17:52.403 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:52.403 "strip_size_kb": 64, 00:17:52.403 "state": "online", 00:17:52.403 "raid_level": "raid5f", 00:17:52.403 "superblock": true, 00:17:52.403 "num_base_bdevs": 3, 00:17:52.403 "num_base_bdevs_discovered": 3, 00:17:52.403 "num_base_bdevs_operational": 3, 00:17:52.403 "base_bdevs_list": [ 00:17:52.403 { 00:17:52.403 "name": "spare", 00:17:52.403 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:52.403 "is_configured": true, 00:17:52.403 "data_offset": 2048, 00:17:52.403 "data_size": 63488 00:17:52.403 }, 00:17:52.403 { 00:17:52.403 "name": "BaseBdev2", 00:17:52.403 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:52.403 "is_configured": true, 00:17:52.403 "data_offset": 2048, 00:17:52.403 "data_size": 63488 00:17:52.403 }, 00:17:52.403 { 00:17:52.403 "name": "BaseBdev3", 00:17:52.404 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:52.404 "is_configured": true, 00:17:52.404 "data_offset": 
2048, 00:17:52.404 "data_size": 63488 00:17:52.404 } 00:17:52.404 ] 00:17:52.404 }' 00:17:52.404 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.404 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.404 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.404 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.404 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.404 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.404 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.404 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:52.404 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.662 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.662 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:52.662 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.662 04:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.662 [2024-11-27 04:34:49.005028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.662 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.663 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.663 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.663 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.663 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.663 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.663 "name": "raid_bdev1", 00:17:52.663 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:52.663 "strip_size_kb": 64, 00:17:52.663 "state": "online", 00:17:52.663 "raid_level": "raid5f", 00:17:52.663 "superblock": true, 00:17:52.663 "num_base_bdevs": 3, 00:17:52.663 "num_base_bdevs_discovered": 2, 00:17:52.663 "num_base_bdevs_operational": 2, 00:17:52.663 "base_bdevs_list": [ 00:17:52.663 { 00:17:52.663 "name": null, 00:17:52.663 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:52.663 "is_configured": false, 00:17:52.663 "data_offset": 0, 00:17:52.663 "data_size": 63488 00:17:52.663 }, 00:17:52.663 { 00:17:52.663 "name": "BaseBdev2", 00:17:52.663 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:52.663 "is_configured": true, 00:17:52.663 "data_offset": 2048, 00:17:52.663 "data_size": 63488 00:17:52.663 }, 00:17:52.663 { 00:17:52.663 "name": "BaseBdev3", 00:17:52.663 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:52.663 "is_configured": true, 00:17:52.663 "data_offset": 2048, 00:17:52.663 "data_size": 63488 00:17:52.663 } 00:17:52.663 ] 00:17:52.663 }' 00:17:52.663 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.663 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.922 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:52.922 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.922 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.922 [2024-11-27 04:34:49.456300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:52.922 [2024-11-27 04:34:49.456539] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:52.922 [2024-11-27 04:34:49.456567] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:52.922 [2024-11-27 04:34:49.456613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:52.922 [2024-11-27 04:34:49.475230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:52.922 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.922 04:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:52.922 [2024-11-27 04:34:49.484630] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.298 "name": "raid_bdev1", 00:17:54.298 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:54.298 "strip_size_kb": 64, 00:17:54.298 "state": "online", 00:17:54.298 
"raid_level": "raid5f", 00:17:54.298 "superblock": true, 00:17:54.298 "num_base_bdevs": 3, 00:17:54.298 "num_base_bdevs_discovered": 3, 00:17:54.298 "num_base_bdevs_operational": 3, 00:17:54.298 "process": { 00:17:54.298 "type": "rebuild", 00:17:54.298 "target": "spare", 00:17:54.298 "progress": { 00:17:54.298 "blocks": 18432, 00:17:54.298 "percent": 14 00:17:54.298 } 00:17:54.298 }, 00:17:54.298 "base_bdevs_list": [ 00:17:54.298 { 00:17:54.298 "name": "spare", 00:17:54.298 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:54.298 "is_configured": true, 00:17:54.298 "data_offset": 2048, 00:17:54.298 "data_size": 63488 00:17:54.298 }, 00:17:54.298 { 00:17:54.298 "name": "BaseBdev2", 00:17:54.298 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:54.298 "is_configured": true, 00:17:54.298 "data_offset": 2048, 00:17:54.298 "data_size": 63488 00:17:54.298 }, 00:17:54.298 { 00:17:54.298 "name": "BaseBdev3", 00:17:54.298 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:54.298 "is_configured": true, 00:17:54.298 "data_offset": 2048, 00:17:54.298 "data_size": 63488 00:17:54.298 } 00:17:54.298 ] 00:17:54.298 }' 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.298 [2024-11-27 04:34:50.620560] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.298 [2024-11-27 04:34:50.696611] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:54.298 [2024-11-27 04:34:50.696717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.298 [2024-11-27 04:34:50.696736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.298 [2024-11-27 04:34:50.696747] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.298 "name": "raid_bdev1", 00:17:54.298 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:54.298 "strip_size_kb": 64, 00:17:54.298 "state": "online", 00:17:54.298 "raid_level": "raid5f", 00:17:54.298 "superblock": true, 00:17:54.298 "num_base_bdevs": 3, 00:17:54.298 "num_base_bdevs_discovered": 2, 00:17:54.298 "num_base_bdevs_operational": 2, 00:17:54.298 "base_bdevs_list": [ 00:17:54.298 { 00:17:54.298 "name": null, 00:17:54.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.298 "is_configured": false, 00:17:54.298 "data_offset": 0, 00:17:54.298 "data_size": 63488 00:17:54.298 }, 00:17:54.298 { 00:17:54.298 "name": "BaseBdev2", 00:17:54.298 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:54.298 "is_configured": true, 00:17:54.298 "data_offset": 2048, 00:17:54.298 "data_size": 63488 00:17:54.298 }, 00:17:54.298 { 00:17:54.298 "name": "BaseBdev3", 00:17:54.298 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:54.298 "is_configured": true, 00:17:54.298 "data_offset": 2048, 00:17:54.298 "data_size": 63488 00:17:54.298 } 00:17:54.298 ] 00:17:54.298 }' 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.298 04:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.865 04:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:54.865 04:34:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.865 04:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.865 [2024-11-27 04:34:51.210449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:54.865 [2024-11-27 04:34:51.210534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.865 [2024-11-27 04:34:51.210560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:54.865 [2024-11-27 04:34:51.210578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.865 [2024-11-27 04:34:51.211205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.865 [2024-11-27 04:34:51.211242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:54.865 [2024-11-27 04:34:51.211359] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:54.865 [2024-11-27 04:34:51.211388] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:54.865 [2024-11-27 04:34:51.211401] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:54.865 [2024-11-27 04:34:51.211430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.865 [2024-11-27 04:34:51.231352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:54.865 spare 00:17:54.865 04:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.865 04:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:54.865 [2024-11-27 04:34:51.240157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.799 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.800 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.800 "name": "raid_bdev1", 00:17:55.800 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:55.800 "strip_size_kb": 64, 00:17:55.800 "state": 
"online", 00:17:55.800 "raid_level": "raid5f", 00:17:55.800 "superblock": true, 00:17:55.800 "num_base_bdevs": 3, 00:17:55.800 "num_base_bdevs_discovered": 3, 00:17:55.800 "num_base_bdevs_operational": 3, 00:17:55.800 "process": { 00:17:55.800 "type": "rebuild", 00:17:55.800 "target": "spare", 00:17:55.800 "progress": { 00:17:55.800 "blocks": 20480, 00:17:55.800 "percent": 16 00:17:55.800 } 00:17:55.800 }, 00:17:55.800 "base_bdevs_list": [ 00:17:55.800 { 00:17:55.800 "name": "spare", 00:17:55.800 "uuid": "081a7067-4bf0-53b0-9f6f-426ad84f2277", 00:17:55.800 "is_configured": true, 00:17:55.800 "data_offset": 2048, 00:17:55.800 "data_size": 63488 00:17:55.800 }, 00:17:55.800 { 00:17:55.800 "name": "BaseBdev2", 00:17:55.800 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:55.800 "is_configured": true, 00:17:55.800 "data_offset": 2048, 00:17:55.800 "data_size": 63488 00:17:55.800 }, 00:17:55.800 { 00:17:55.800 "name": "BaseBdev3", 00:17:55.800 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:55.800 "is_configured": true, 00:17:55.800 "data_offset": 2048, 00:17:55.800 "data_size": 63488 00:17:55.800 } 00:17:55.800 ] 00:17:55.800 }' 00:17:55.800 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.800 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.800 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.057 [2024-11-27 04:34:52.396096] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.057 [2024-11-27 04:34:52.452279] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:56.057 [2024-11-27 04:34:52.452389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.057 [2024-11-27 04:34:52.452414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.057 [2024-11-27 04:34:52.452424] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.057 04:34:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.057 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.057 "name": "raid_bdev1", 00:17:56.057 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:56.058 "strip_size_kb": 64, 00:17:56.058 "state": "online", 00:17:56.058 "raid_level": "raid5f", 00:17:56.058 "superblock": true, 00:17:56.058 "num_base_bdevs": 3, 00:17:56.058 "num_base_bdevs_discovered": 2, 00:17:56.058 "num_base_bdevs_operational": 2, 00:17:56.058 "base_bdevs_list": [ 00:17:56.058 { 00:17:56.058 "name": null, 00:17:56.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.058 "is_configured": false, 00:17:56.058 "data_offset": 0, 00:17:56.058 "data_size": 63488 00:17:56.058 }, 00:17:56.058 { 00:17:56.058 "name": "BaseBdev2", 00:17:56.058 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:56.058 "is_configured": true, 00:17:56.058 "data_offset": 2048, 00:17:56.058 "data_size": 63488 00:17:56.058 }, 00:17:56.058 { 00:17:56.058 "name": "BaseBdev3", 00:17:56.058 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:56.058 "is_configured": true, 00:17:56.058 "data_offset": 2048, 00:17:56.058 "data_size": 63488 00:17:56.058 } 00:17:56.058 ] 00:17:56.058 }' 00:17:56.058 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.058 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.624 04:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.624 "name": "raid_bdev1", 00:17:56.624 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:56.624 "strip_size_kb": 64, 00:17:56.624 "state": "online", 00:17:56.624 "raid_level": "raid5f", 00:17:56.624 "superblock": true, 00:17:56.624 "num_base_bdevs": 3, 00:17:56.624 "num_base_bdevs_discovered": 2, 00:17:56.624 "num_base_bdevs_operational": 2, 00:17:56.624 "base_bdevs_list": [ 00:17:56.624 { 00:17:56.624 "name": null, 00:17:56.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.624 "is_configured": false, 00:17:56.624 "data_offset": 0, 00:17:56.624 "data_size": 63488 00:17:56.624 }, 00:17:56.624 { 00:17:56.624 "name": "BaseBdev2", 00:17:56.624 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:56.624 "is_configured": true, 00:17:56.624 "data_offset": 2048, 00:17:56.624 "data_size": 63488 00:17:56.624 }, 00:17:56.624 { 00:17:56.624 "name": "BaseBdev3", 00:17:56.624 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:56.624 
"is_configured": true, 00:17:56.624 "data_offset": 2048, 00:17:56.624 "data_size": 63488 00:17:56.624 } 00:17:56.624 ] 00:17:56.624 }' 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.624 [2024-11-27 04:34:53.123266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:56.624 [2024-11-27 04:34:53.123341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.624 [2024-11-27 04:34:53.123371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:56.624 [2024-11-27 04:34:53.123382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.624 [2024-11-27 04:34:53.123980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.624 
[2024-11-27 04:34:53.124014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.624 [2024-11-27 04:34:53.124134] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:56.624 [2024-11-27 04:34:53.124164] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:56.624 [2024-11-27 04:34:53.124191] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:56.624 [2024-11-27 04:34:53.124204] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:56.624 BaseBdev1 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.624 04:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.555 04:34:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.555 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.812 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.812 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.812 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.812 "name": "raid_bdev1", 00:17:57.812 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:57.812 "strip_size_kb": 64, 00:17:57.812 "state": "online", 00:17:57.812 "raid_level": "raid5f", 00:17:57.812 "superblock": true, 00:17:57.812 "num_base_bdevs": 3, 00:17:57.812 "num_base_bdevs_discovered": 2, 00:17:57.812 "num_base_bdevs_operational": 2, 00:17:57.812 "base_bdevs_list": [ 00:17:57.812 { 00:17:57.812 "name": null, 00:17:57.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.812 "is_configured": false, 00:17:57.812 "data_offset": 0, 00:17:57.812 "data_size": 63488 00:17:57.812 }, 00:17:57.812 { 00:17:57.812 "name": "BaseBdev2", 00:17:57.812 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:57.812 "is_configured": true, 00:17:57.812 "data_offset": 2048, 00:17:57.812 "data_size": 63488 00:17:57.812 }, 00:17:57.812 { 00:17:57.812 "name": "BaseBdev3", 00:17:57.812 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:57.812 "is_configured": true, 00:17:57.812 "data_offset": 2048, 00:17:57.812 "data_size": 63488 00:17:57.812 } 00:17:57.812 ] 00:17:57.812 }' 00:17:57.812 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.812 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.070 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.328 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.329 "name": "raid_bdev1", 00:17:58.329 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:58.329 "strip_size_kb": 64, 00:17:58.329 "state": "online", 00:17:58.329 "raid_level": "raid5f", 00:17:58.329 "superblock": true, 00:17:58.329 "num_base_bdevs": 3, 00:17:58.329 "num_base_bdevs_discovered": 2, 00:17:58.329 "num_base_bdevs_operational": 2, 00:17:58.329 "base_bdevs_list": [ 00:17:58.329 { 00:17:58.329 "name": null, 00:17:58.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.329 "is_configured": false, 00:17:58.329 "data_offset": 0, 00:17:58.329 "data_size": 63488 00:17:58.329 }, 00:17:58.329 { 00:17:58.329 "name": "BaseBdev2", 00:17:58.329 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 
00:17:58.329 "is_configured": true, 00:17:58.329 "data_offset": 2048, 00:17:58.329 "data_size": 63488 00:17:58.329 }, 00:17:58.329 { 00:17:58.329 "name": "BaseBdev3", 00:17:58.329 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:58.329 "is_configured": true, 00:17:58.329 "data_offset": 2048, 00:17:58.329 "data_size": 63488 00:17:58.329 } 00:17:58.329 ] 00:17:58.329 }' 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.329 04:34:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.329 [2024-11-27 04:34:54.780598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.329 [2024-11-27 04:34:54.780796] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:58.329 [2024-11-27 04:34:54.780823] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:58.329 request: 00:17:58.329 { 00:17:58.329 "base_bdev": "BaseBdev1", 00:17:58.329 "raid_bdev": "raid_bdev1", 00:17:58.329 "method": "bdev_raid_add_base_bdev", 00:17:58.329 "req_id": 1 00:17:58.329 } 00:17:58.329 Got JSON-RPC error response 00:17:58.329 response: 00:17:58.329 { 00:17:58.329 "code": -22, 00:17:58.329 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:58.329 } 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.329 04:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.265 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.265 "name": "raid_bdev1", 00:17:59.265 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:59.265 "strip_size_kb": 64, 00:17:59.265 "state": "online", 00:17:59.265 "raid_level": "raid5f", 00:17:59.265 "superblock": true, 00:17:59.266 "num_base_bdevs": 3, 00:17:59.266 "num_base_bdevs_discovered": 2, 00:17:59.266 "num_base_bdevs_operational": 2, 00:17:59.266 "base_bdevs_list": [ 00:17:59.266 { 00:17:59.266 "name": null, 00:17:59.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.266 "is_configured": false, 00:17:59.266 "data_offset": 0, 00:17:59.266 "data_size": 63488 00:17:59.266 }, 00:17:59.266 { 00:17:59.266 
"name": "BaseBdev2", 00:17:59.266 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:59.266 "is_configured": true, 00:17:59.266 "data_offset": 2048, 00:17:59.266 "data_size": 63488 00:17:59.266 }, 00:17:59.266 { 00:17:59.266 "name": "BaseBdev3", 00:17:59.266 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:59.266 "is_configured": true, 00:17:59.266 "data_offset": 2048, 00:17:59.266 "data_size": 63488 00:17:59.266 } 00:17:59.266 ] 00:17:59.266 }' 00:17:59.266 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.266 04:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.834 "name": "raid_bdev1", 00:17:59.834 "uuid": "8bcbafc1-c449-4fe2-8fa7-ce39d2d82165", 00:17:59.834 
"strip_size_kb": 64, 00:17:59.834 "state": "online", 00:17:59.834 "raid_level": "raid5f", 00:17:59.834 "superblock": true, 00:17:59.834 "num_base_bdevs": 3, 00:17:59.834 "num_base_bdevs_discovered": 2, 00:17:59.834 "num_base_bdevs_operational": 2, 00:17:59.834 "base_bdevs_list": [ 00:17:59.834 { 00:17:59.834 "name": null, 00:17:59.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.834 "is_configured": false, 00:17:59.834 "data_offset": 0, 00:17:59.834 "data_size": 63488 00:17:59.834 }, 00:17:59.834 { 00:17:59.834 "name": "BaseBdev2", 00:17:59.834 "uuid": "71ae5fb9-11ae-50e0-88bb-16f50a0ed279", 00:17:59.834 "is_configured": true, 00:17:59.834 "data_offset": 2048, 00:17:59.834 "data_size": 63488 00:17:59.834 }, 00:17:59.834 { 00:17:59.834 "name": "BaseBdev3", 00:17:59.834 "uuid": "3b8c4308-ef19-5c8e-b796-268d2bc137dc", 00:17:59.834 "is_configured": true, 00:17:59.834 "data_offset": 2048, 00:17:59.834 "data_size": 63488 00:17:59.834 } 00:17:59.834 ] 00:17:59.834 }' 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82359 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82359 ']' 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82359 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:59.834 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.834 04:34:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82359 00:18:00.092 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.092 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.092 killing process with pid 82359 00:18:00.092 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82359' 00:18:00.092 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82359 00:18:00.092 Received shutdown signal, test time was about 60.000000 seconds 00:18:00.092 00:18:00.092 Latency(us) 00:18:00.092 [2024-11-27T04:34:56.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.092 [2024-11-27T04:34:56.679Z] =================================================================================================================== 00:18:00.092 [2024-11-27T04:34:56.679Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.092 [2024-11-27 04:34:56.428705] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:00.092 04:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82359 00:18:00.092 [2024-11-27 04:34:56.428884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.092 [2024-11-27 04:34:56.428960] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.092 [2024-11-27 04:34:56.428981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:00.350 [2024-11-27 04:34:56.873055] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.722 04:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:01.722 00:18:01.722 real 0m24.053s 00:18:01.722 user 0m31.020s 
00:18:01.722 sys 0m2.769s 00:18:01.722 04:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.722 04:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.722 ************************************ 00:18:01.722 END TEST raid5f_rebuild_test_sb 00:18:01.722 ************************************ 00:18:01.722 04:34:58 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:01.722 04:34:58 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:01.722 04:34:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:01.722 04:34:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.722 04:34:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.722 ************************************ 00:18:01.722 START TEST raid5f_state_function_test 00:18:01.722 ************************************ 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83123 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:01.722 Process raid pid: 83123 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83123' 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83123 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83123 ']' 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.722 04:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.722 [2024-11-27 04:34:58.249382] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:01.722 [2024-11-27 04:34:58.249503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.981 [2024-11-27 04:34:58.426122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.981 [2024-11-27 04:34:58.550731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.241 [2024-11-27 04:34:58.782076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.241 [2024-11-27 04:34:58.782124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.809 [2024-11-27 04:34:59.184603] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.809 [2024-11-27 04:34:59.184665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.809 [2024-11-27 04:34:59.184677] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.809 [2024-11-27 04:34:59.184688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.809 [2024-11-27 04:34:59.184695] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:02.809 [2024-11-27 04:34:59.184705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:02.809 [2024-11-27 04:34:59.184713] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:02.809 [2024-11-27 04:34:59.184723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.809 04:34:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.809 "name": "Existed_Raid", 00:18:02.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.809 "strip_size_kb": 64, 00:18:02.809 "state": "configuring", 00:18:02.809 "raid_level": "raid5f", 00:18:02.809 "superblock": false, 00:18:02.809 "num_base_bdevs": 4, 00:18:02.809 "num_base_bdevs_discovered": 0, 00:18:02.809 "num_base_bdevs_operational": 4, 00:18:02.809 "base_bdevs_list": [ 00:18:02.809 { 00:18:02.809 "name": "BaseBdev1", 00:18:02.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.809 "is_configured": false, 00:18:02.809 "data_offset": 0, 00:18:02.809 "data_size": 0 00:18:02.809 }, 00:18:02.809 { 00:18:02.809 "name": "BaseBdev2", 00:18:02.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.809 "is_configured": false, 00:18:02.809 "data_offset": 0, 00:18:02.809 "data_size": 0 00:18:02.809 }, 00:18:02.809 { 00:18:02.809 "name": "BaseBdev3", 00:18:02.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.809 "is_configured": false, 00:18:02.809 "data_offset": 0, 00:18:02.809 "data_size": 0 00:18:02.809 }, 00:18:02.809 { 00:18:02.809 "name": "BaseBdev4", 00:18:02.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.809 "is_configured": false, 00:18:02.809 "data_offset": 0, 00:18:02.809 "data_size": 0 00:18:02.809 } 00:18:02.809 ] 00:18:02.809 }' 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.809 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 04:34:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:03.377 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.377 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 [2024-11-27 04:34:59.687724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:03.377 [2024-11-27 04:34:59.687777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:03.377 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.377 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:03.377 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.377 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 [2024-11-27 04:34:59.699734] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:03.377 [2024-11-27 04:34:59.699793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:03.377 [2024-11-27 04:34:59.699803] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:03.377 [2024-11-27 04:34:59.699816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:03.377 [2024-11-27 04:34:59.699823] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:03.377 [2024-11-27 04:34:59.699833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:03.377 [2024-11-27 04:34:59.699840] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:18:03.377 [2024-11-27 04:34:59.699850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.378 [2024-11-27 04:34:59.747627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.378 BaseBdev1 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.378 
04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.378 [ 00:18:03.378 { 00:18:03.378 "name": "BaseBdev1", 00:18:03.378 "aliases": [ 00:18:03.378 "a2415635-6320-489b-8ada-71393ac446e1" 00:18:03.378 ], 00:18:03.378 "product_name": "Malloc disk", 00:18:03.378 "block_size": 512, 00:18:03.378 "num_blocks": 65536, 00:18:03.378 "uuid": "a2415635-6320-489b-8ada-71393ac446e1", 00:18:03.378 "assigned_rate_limits": { 00:18:03.378 "rw_ios_per_sec": 0, 00:18:03.378 "rw_mbytes_per_sec": 0, 00:18:03.378 "r_mbytes_per_sec": 0, 00:18:03.378 "w_mbytes_per_sec": 0 00:18:03.378 }, 00:18:03.378 "claimed": true, 00:18:03.378 "claim_type": "exclusive_write", 00:18:03.378 "zoned": false, 00:18:03.378 "supported_io_types": { 00:18:03.378 "read": true, 00:18:03.378 "write": true, 00:18:03.378 "unmap": true, 00:18:03.378 "flush": true, 00:18:03.378 "reset": true, 00:18:03.378 "nvme_admin": false, 00:18:03.378 "nvme_io": false, 00:18:03.378 "nvme_io_md": false, 00:18:03.378 "write_zeroes": true, 00:18:03.378 "zcopy": true, 00:18:03.378 "get_zone_info": false, 00:18:03.378 "zone_management": false, 00:18:03.378 "zone_append": false, 00:18:03.378 "compare": false, 00:18:03.378 "compare_and_write": false, 00:18:03.378 "abort": true, 00:18:03.378 "seek_hole": false, 00:18:03.378 "seek_data": false, 00:18:03.378 "copy": true, 00:18:03.378 "nvme_iov_md": false 00:18:03.378 }, 00:18:03.378 "memory_domains": [ 00:18:03.378 { 00:18:03.378 "dma_device_id": "system", 00:18:03.378 "dma_device_type": 1 00:18:03.378 }, 00:18:03.378 { 00:18:03.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.378 "dma_device_type": 2 00:18:03.378 } 00:18:03.378 ], 00:18:03.378 "driver_specific": {} 00:18:03.378 } 
00:18:03.378 ] 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.378 "name": "Existed_Raid", 00:18:03.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.378 "strip_size_kb": 64, 00:18:03.378 "state": "configuring", 00:18:03.378 "raid_level": "raid5f", 00:18:03.378 "superblock": false, 00:18:03.378 "num_base_bdevs": 4, 00:18:03.378 "num_base_bdevs_discovered": 1, 00:18:03.378 "num_base_bdevs_operational": 4, 00:18:03.378 "base_bdevs_list": [ 00:18:03.378 { 00:18:03.378 "name": "BaseBdev1", 00:18:03.378 "uuid": "a2415635-6320-489b-8ada-71393ac446e1", 00:18:03.378 "is_configured": true, 00:18:03.378 "data_offset": 0, 00:18:03.378 "data_size": 65536 00:18:03.378 }, 00:18:03.378 { 00:18:03.378 "name": "BaseBdev2", 00:18:03.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.378 "is_configured": false, 00:18:03.378 "data_offset": 0, 00:18:03.378 "data_size": 0 00:18:03.378 }, 00:18:03.378 { 00:18:03.378 "name": "BaseBdev3", 00:18:03.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.378 "is_configured": false, 00:18:03.378 "data_offset": 0, 00:18:03.378 "data_size": 0 00:18:03.378 }, 00:18:03.378 { 00:18:03.378 "name": "BaseBdev4", 00:18:03.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.378 "is_configured": false, 00:18:03.378 "data_offset": 0, 00:18:03.378 "data_size": 0 00:18:03.378 } 00:18:03.378 ] 00:18:03.378 }' 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.378 04:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.949 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:03.949 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.949 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.949 
[2024-11-27 04:35:00.274808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:03.949 [2024-11-27 04:35:00.274870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:03.949 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.949 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:03.949 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.949 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.950 [2024-11-27 04:35:00.286848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.950 [2024-11-27 04:35:00.288925] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:03.950 [2024-11-27 04:35:00.289012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:03.950 [2024-11-27 04:35:00.289049] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:03.950 [2024-11-27 04:35:00.289093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:03.950 [2024-11-27 04:35:00.289125] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:03.950 [2024-11-27 04:35:00.289156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.950 "name": "Existed_Raid", 00:18:03.950 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:03.950 "strip_size_kb": 64, 00:18:03.950 "state": "configuring", 00:18:03.950 "raid_level": "raid5f", 00:18:03.950 "superblock": false, 00:18:03.950 "num_base_bdevs": 4, 00:18:03.950 "num_base_bdevs_discovered": 1, 00:18:03.950 "num_base_bdevs_operational": 4, 00:18:03.950 "base_bdevs_list": [ 00:18:03.950 { 00:18:03.950 "name": "BaseBdev1", 00:18:03.950 "uuid": "a2415635-6320-489b-8ada-71393ac446e1", 00:18:03.950 "is_configured": true, 00:18:03.950 "data_offset": 0, 00:18:03.950 "data_size": 65536 00:18:03.950 }, 00:18:03.950 { 00:18:03.950 "name": "BaseBdev2", 00:18:03.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.950 "is_configured": false, 00:18:03.950 "data_offset": 0, 00:18:03.950 "data_size": 0 00:18:03.950 }, 00:18:03.950 { 00:18:03.950 "name": "BaseBdev3", 00:18:03.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.950 "is_configured": false, 00:18:03.950 "data_offset": 0, 00:18:03.950 "data_size": 0 00:18:03.950 }, 00:18:03.950 { 00:18:03.950 "name": "BaseBdev4", 00:18:03.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.950 "is_configured": false, 00:18:03.950 "data_offset": 0, 00:18:03.950 "data_size": 0 00:18:03.950 } 00:18:03.950 ] 00:18:03.950 }' 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.950 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.209 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:04.209 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.209 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.468 [2024-11-27 04:35:00.837246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.468 BaseBdev2 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.468 [ 00:18:04.468 { 00:18:04.468 "name": "BaseBdev2", 00:18:04.468 "aliases": [ 00:18:04.468 "604753b3-8101-4e98-ab35-1479b179ef69" 00:18:04.468 ], 00:18:04.468 "product_name": "Malloc disk", 00:18:04.468 "block_size": 512, 00:18:04.468 "num_blocks": 65536, 00:18:04.468 "uuid": "604753b3-8101-4e98-ab35-1479b179ef69", 00:18:04.468 "assigned_rate_limits": { 00:18:04.468 "rw_ios_per_sec": 0, 00:18:04.468 "rw_mbytes_per_sec": 0, 00:18:04.468 
"r_mbytes_per_sec": 0, 00:18:04.468 "w_mbytes_per_sec": 0 00:18:04.468 }, 00:18:04.468 "claimed": true, 00:18:04.468 "claim_type": "exclusive_write", 00:18:04.468 "zoned": false, 00:18:04.468 "supported_io_types": { 00:18:04.468 "read": true, 00:18:04.468 "write": true, 00:18:04.468 "unmap": true, 00:18:04.468 "flush": true, 00:18:04.468 "reset": true, 00:18:04.468 "nvme_admin": false, 00:18:04.468 "nvme_io": false, 00:18:04.468 "nvme_io_md": false, 00:18:04.468 "write_zeroes": true, 00:18:04.468 "zcopy": true, 00:18:04.468 "get_zone_info": false, 00:18:04.468 "zone_management": false, 00:18:04.468 "zone_append": false, 00:18:04.468 "compare": false, 00:18:04.468 "compare_and_write": false, 00:18:04.468 "abort": true, 00:18:04.468 "seek_hole": false, 00:18:04.468 "seek_data": false, 00:18:04.468 "copy": true, 00:18:04.468 "nvme_iov_md": false 00:18:04.468 }, 00:18:04.468 "memory_domains": [ 00:18:04.468 { 00:18:04.468 "dma_device_id": "system", 00:18:04.468 "dma_device_type": 1 00:18:04.468 }, 00:18:04.468 { 00:18:04.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.468 "dma_device_type": 2 00:18:04.468 } 00:18:04.468 ], 00:18:04.468 "driver_specific": {} 00:18:04.468 } 00:18:04.468 ] 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.468 "name": "Existed_Raid", 00:18:04.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.468 "strip_size_kb": 64, 00:18:04.468 "state": "configuring", 00:18:04.468 "raid_level": "raid5f", 00:18:04.468 "superblock": false, 00:18:04.468 "num_base_bdevs": 4, 00:18:04.468 "num_base_bdevs_discovered": 2, 00:18:04.468 "num_base_bdevs_operational": 4, 00:18:04.468 "base_bdevs_list": [ 00:18:04.468 { 00:18:04.468 "name": "BaseBdev1", 00:18:04.468 "uuid": 
"a2415635-6320-489b-8ada-71393ac446e1", 00:18:04.468 "is_configured": true, 00:18:04.468 "data_offset": 0, 00:18:04.468 "data_size": 65536 00:18:04.468 }, 00:18:04.468 { 00:18:04.468 "name": "BaseBdev2", 00:18:04.468 "uuid": "604753b3-8101-4e98-ab35-1479b179ef69", 00:18:04.468 "is_configured": true, 00:18:04.468 "data_offset": 0, 00:18:04.468 "data_size": 65536 00:18:04.468 }, 00:18:04.468 { 00:18:04.468 "name": "BaseBdev3", 00:18:04.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.468 "is_configured": false, 00:18:04.468 "data_offset": 0, 00:18:04.468 "data_size": 0 00:18:04.468 }, 00:18:04.468 { 00:18:04.468 "name": "BaseBdev4", 00:18:04.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.468 "is_configured": false, 00:18:04.468 "data_offset": 0, 00:18:04.468 "data_size": 0 00:18:04.468 } 00:18:04.468 ] 00:18:04.468 }' 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.468 04:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.035 [2024-11-27 04:35:01.367865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:05.035 BaseBdev3 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.035 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.035 [ 00:18:05.036 { 00:18:05.036 "name": "BaseBdev3", 00:18:05.036 "aliases": [ 00:18:05.036 "a83d3fe7-376f-4fbe-abe3-171d181a9272" 00:18:05.036 ], 00:18:05.036 "product_name": "Malloc disk", 00:18:05.036 "block_size": 512, 00:18:05.036 "num_blocks": 65536, 00:18:05.036 "uuid": "a83d3fe7-376f-4fbe-abe3-171d181a9272", 00:18:05.036 "assigned_rate_limits": { 00:18:05.036 "rw_ios_per_sec": 0, 00:18:05.036 "rw_mbytes_per_sec": 0, 00:18:05.036 "r_mbytes_per_sec": 0, 00:18:05.036 "w_mbytes_per_sec": 0 00:18:05.036 }, 00:18:05.036 "claimed": true, 00:18:05.036 "claim_type": "exclusive_write", 00:18:05.036 "zoned": false, 00:18:05.036 "supported_io_types": { 00:18:05.036 "read": true, 00:18:05.036 "write": true, 00:18:05.036 "unmap": true, 00:18:05.036 "flush": true, 00:18:05.036 "reset": true, 00:18:05.036 "nvme_admin": false, 
00:18:05.036 "nvme_io": false, 00:18:05.036 "nvme_io_md": false, 00:18:05.036 "write_zeroes": true, 00:18:05.036 "zcopy": true, 00:18:05.036 "get_zone_info": false, 00:18:05.036 "zone_management": false, 00:18:05.036 "zone_append": false, 00:18:05.036 "compare": false, 00:18:05.036 "compare_and_write": false, 00:18:05.036 "abort": true, 00:18:05.036 "seek_hole": false, 00:18:05.036 "seek_data": false, 00:18:05.036 "copy": true, 00:18:05.036 "nvme_iov_md": false 00:18:05.036 }, 00:18:05.036 "memory_domains": [ 00:18:05.036 { 00:18:05.036 "dma_device_id": "system", 00:18:05.036 "dma_device_type": 1 00:18:05.036 }, 00:18:05.036 { 00:18:05.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.036 "dma_device_type": 2 00:18:05.036 } 00:18:05.036 ], 00:18:05.036 "driver_specific": {} 00:18:05.036 } 00:18:05.036 ] 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.036 "name": "Existed_Raid", 00:18:05.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.036 "strip_size_kb": 64, 00:18:05.036 "state": "configuring", 00:18:05.036 "raid_level": "raid5f", 00:18:05.036 "superblock": false, 00:18:05.036 "num_base_bdevs": 4, 00:18:05.036 "num_base_bdevs_discovered": 3, 00:18:05.036 "num_base_bdevs_operational": 4, 00:18:05.036 "base_bdevs_list": [ 00:18:05.036 { 00:18:05.036 "name": "BaseBdev1", 00:18:05.036 "uuid": "a2415635-6320-489b-8ada-71393ac446e1", 00:18:05.036 "is_configured": true, 00:18:05.036 "data_offset": 0, 00:18:05.036 "data_size": 65536 00:18:05.036 }, 00:18:05.036 { 00:18:05.036 "name": "BaseBdev2", 00:18:05.036 "uuid": "604753b3-8101-4e98-ab35-1479b179ef69", 00:18:05.036 "is_configured": true, 00:18:05.036 "data_offset": 0, 00:18:05.036 "data_size": 65536 00:18:05.036 }, 00:18:05.036 { 
00:18:05.036 "name": "BaseBdev3", 00:18:05.036 "uuid": "a83d3fe7-376f-4fbe-abe3-171d181a9272", 00:18:05.036 "is_configured": true, 00:18:05.036 "data_offset": 0, 00:18:05.036 "data_size": 65536 00:18:05.036 }, 00:18:05.036 { 00:18:05.036 "name": "BaseBdev4", 00:18:05.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.036 "is_configured": false, 00:18:05.036 "data_offset": 0, 00:18:05.036 "data_size": 0 00:18:05.036 } 00:18:05.036 ] 00:18:05.036 }' 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.036 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.604 [2024-11-27 04:35:01.947897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:05.604 [2024-11-27 04:35:01.947974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:05.604 [2024-11-27 04:35:01.947985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:05.604 [2024-11-27 04:35:01.948299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:05.604 BaseBdev4 00:18:05.604 [2024-11-27 04:35:01.956533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:05.604 [2024-11-27 04:35:01.956561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:05.604 [2024-11-27 04:35:01.956882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.604 04:35:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.604 [ 00:18:05.604 { 00:18:05.604 "name": "BaseBdev4", 00:18:05.604 "aliases": [ 00:18:05.604 "ff0e2a22-6541-46d6-bb91-69e639f008b7" 00:18:05.604 ], 00:18:05.604 "product_name": "Malloc disk", 00:18:05.604 "block_size": 512, 00:18:05.604 "num_blocks": 65536, 00:18:05.604 "uuid": "ff0e2a22-6541-46d6-bb91-69e639f008b7", 00:18:05.604 "assigned_rate_limits": { 00:18:05.604 "rw_ios_per_sec": 0, 00:18:05.604 
"rw_mbytes_per_sec": 0, 00:18:05.604 "r_mbytes_per_sec": 0, 00:18:05.604 "w_mbytes_per_sec": 0 00:18:05.604 }, 00:18:05.604 "claimed": true, 00:18:05.604 "claim_type": "exclusive_write", 00:18:05.604 "zoned": false, 00:18:05.604 "supported_io_types": { 00:18:05.604 "read": true, 00:18:05.604 "write": true, 00:18:05.604 "unmap": true, 00:18:05.604 "flush": true, 00:18:05.604 "reset": true, 00:18:05.604 "nvme_admin": false, 00:18:05.604 "nvme_io": false, 00:18:05.604 "nvme_io_md": false, 00:18:05.604 "write_zeroes": true, 00:18:05.604 "zcopy": true, 00:18:05.604 "get_zone_info": false, 00:18:05.604 "zone_management": false, 00:18:05.604 "zone_append": false, 00:18:05.604 "compare": false, 00:18:05.604 "compare_and_write": false, 00:18:05.604 "abort": true, 00:18:05.604 "seek_hole": false, 00:18:05.604 "seek_data": false, 00:18:05.604 "copy": true, 00:18:05.604 "nvme_iov_md": false 00:18:05.604 }, 00:18:05.604 "memory_domains": [ 00:18:05.604 { 00:18:05.604 "dma_device_id": "system", 00:18:05.604 "dma_device_type": 1 00:18:05.604 }, 00:18:05.604 { 00:18:05.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.604 "dma_device_type": 2 00:18:05.604 } 00:18:05.604 ], 00:18:05.604 "driver_specific": {} 00:18:05.604 } 00:18:05.604 ] 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.604 04:35:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.604 04:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.604 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.604 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.604 "name": "Existed_Raid", 00:18:05.604 "uuid": "591d0262-caed-492a-92e8-5cc9fca56085", 00:18:05.604 "strip_size_kb": 64, 00:18:05.604 "state": "online", 00:18:05.604 "raid_level": "raid5f", 00:18:05.604 "superblock": false, 00:18:05.604 "num_base_bdevs": 4, 00:18:05.604 "num_base_bdevs_discovered": 4, 00:18:05.604 "num_base_bdevs_operational": 4, 00:18:05.604 "base_bdevs_list": [ 00:18:05.604 { 00:18:05.604 "name": 
"BaseBdev1", 00:18:05.604 "uuid": "a2415635-6320-489b-8ada-71393ac446e1", 00:18:05.604 "is_configured": true, 00:18:05.604 "data_offset": 0, 00:18:05.604 "data_size": 65536 00:18:05.604 }, 00:18:05.604 { 00:18:05.604 "name": "BaseBdev2", 00:18:05.604 "uuid": "604753b3-8101-4e98-ab35-1479b179ef69", 00:18:05.604 "is_configured": true, 00:18:05.604 "data_offset": 0, 00:18:05.604 "data_size": 65536 00:18:05.604 }, 00:18:05.604 { 00:18:05.604 "name": "BaseBdev3", 00:18:05.604 "uuid": "a83d3fe7-376f-4fbe-abe3-171d181a9272", 00:18:05.604 "is_configured": true, 00:18:05.604 "data_offset": 0, 00:18:05.604 "data_size": 65536 00:18:05.604 }, 00:18:05.604 { 00:18:05.604 "name": "BaseBdev4", 00:18:05.604 "uuid": "ff0e2a22-6541-46d6-bb91-69e639f008b7", 00:18:05.604 "is_configured": true, 00:18:05.604 "data_offset": 0, 00:18:05.604 "data_size": 65536 00:18:05.604 } 00:18:05.604 ] 00:18:05.604 }' 00:18:05.604 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.604 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.863 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:05.863 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:05.863 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:05.863 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:05.863 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:05.863 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:05.863 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:05.863 04:35:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.863 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.864 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:05.864 [2024-11-27 04:35:02.406184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.864 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.864 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:05.864 "name": "Existed_Raid", 00:18:05.864 "aliases": [ 00:18:05.864 "591d0262-caed-492a-92e8-5cc9fca56085" 00:18:05.864 ], 00:18:05.864 "product_name": "Raid Volume", 00:18:05.864 "block_size": 512, 00:18:05.864 "num_blocks": 196608, 00:18:05.864 "uuid": "591d0262-caed-492a-92e8-5cc9fca56085", 00:18:05.864 "assigned_rate_limits": { 00:18:05.864 "rw_ios_per_sec": 0, 00:18:05.864 "rw_mbytes_per_sec": 0, 00:18:05.864 "r_mbytes_per_sec": 0, 00:18:05.864 "w_mbytes_per_sec": 0 00:18:05.864 }, 00:18:05.864 "claimed": false, 00:18:05.864 "zoned": false, 00:18:05.864 "supported_io_types": { 00:18:05.864 "read": true, 00:18:05.864 "write": true, 00:18:05.864 "unmap": false, 00:18:05.864 "flush": false, 00:18:05.864 "reset": true, 00:18:05.864 "nvme_admin": false, 00:18:05.864 "nvme_io": false, 00:18:05.864 "nvme_io_md": false, 00:18:05.864 "write_zeroes": true, 00:18:05.864 "zcopy": false, 00:18:05.864 "get_zone_info": false, 00:18:05.864 "zone_management": false, 00:18:05.864 "zone_append": false, 00:18:05.864 "compare": false, 00:18:05.864 "compare_and_write": false, 00:18:05.864 "abort": false, 00:18:05.864 "seek_hole": false, 00:18:05.864 "seek_data": false, 00:18:05.864 "copy": false, 00:18:05.864 "nvme_iov_md": false 00:18:05.864 }, 00:18:05.864 "driver_specific": { 00:18:05.864 "raid": { 00:18:05.864 "uuid": "591d0262-caed-492a-92e8-5cc9fca56085", 00:18:05.864 "strip_size_kb": 64, 
00:18:05.864 "state": "online", 00:18:05.864 "raid_level": "raid5f", 00:18:05.864 "superblock": false, 00:18:05.864 "num_base_bdevs": 4, 00:18:05.864 "num_base_bdevs_discovered": 4, 00:18:05.864 "num_base_bdevs_operational": 4, 00:18:05.864 "base_bdevs_list": [ 00:18:05.864 { 00:18:05.864 "name": "BaseBdev1", 00:18:05.864 "uuid": "a2415635-6320-489b-8ada-71393ac446e1", 00:18:05.864 "is_configured": true, 00:18:05.864 "data_offset": 0, 00:18:05.864 "data_size": 65536 00:18:05.864 }, 00:18:05.864 { 00:18:05.864 "name": "BaseBdev2", 00:18:05.864 "uuid": "604753b3-8101-4e98-ab35-1479b179ef69", 00:18:05.864 "is_configured": true, 00:18:05.864 "data_offset": 0, 00:18:05.864 "data_size": 65536 00:18:05.864 }, 00:18:05.864 { 00:18:05.864 "name": "BaseBdev3", 00:18:05.864 "uuid": "a83d3fe7-376f-4fbe-abe3-171d181a9272", 00:18:05.864 "is_configured": true, 00:18:05.864 "data_offset": 0, 00:18:05.864 "data_size": 65536 00:18:05.864 }, 00:18:05.864 { 00:18:05.864 "name": "BaseBdev4", 00:18:05.864 "uuid": "ff0e2a22-6541-46d6-bb91-69e639f008b7", 00:18:05.864 "is_configured": true, 00:18:05.864 "data_offset": 0, 00:18:05.864 "data_size": 65536 00:18:05.864 } 00:18:05.864 ] 00:18:05.864 } 00:18:05.864 } 00:18:05.864 }' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:06.123 BaseBdev2 00:18:06.123 BaseBdev3 00:18:06.123 BaseBdev4' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.123 04:35:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.123 04:35:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.123 [2024-11-27 04:35:02.701501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.383 04:35:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.383 "name": "Existed_Raid", 00:18:06.383 "uuid": "591d0262-caed-492a-92e8-5cc9fca56085", 00:18:06.383 "strip_size_kb": 64, 00:18:06.383 "state": "online", 00:18:06.383 "raid_level": "raid5f", 00:18:06.383 "superblock": false, 00:18:06.383 "num_base_bdevs": 4, 00:18:06.383 "num_base_bdevs_discovered": 3, 00:18:06.383 "num_base_bdevs_operational": 3, 00:18:06.383 "base_bdevs_list": [ 00:18:06.383 { 00:18:06.383 "name": null, 00:18:06.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.383 "is_configured": false, 00:18:06.383 "data_offset": 0, 00:18:06.383 "data_size": 65536 00:18:06.383 }, 00:18:06.383 { 00:18:06.383 "name": "BaseBdev2", 00:18:06.383 "uuid": "604753b3-8101-4e98-ab35-1479b179ef69", 00:18:06.383 "is_configured": true, 00:18:06.383 "data_offset": 0, 00:18:06.383 "data_size": 65536 00:18:06.383 }, 00:18:06.383 { 00:18:06.383 "name": "BaseBdev3", 00:18:06.383 "uuid": "a83d3fe7-376f-4fbe-abe3-171d181a9272", 00:18:06.383 "is_configured": true, 00:18:06.383 "data_offset": 0, 00:18:06.383 "data_size": 65536 00:18:06.383 }, 00:18:06.383 { 00:18:06.383 "name": "BaseBdev4", 00:18:06.383 "uuid": "ff0e2a22-6541-46d6-bb91-69e639f008b7", 00:18:06.383 "is_configured": true, 00:18:06.383 "data_offset": 0, 00:18:06.383 "data_size": 65536 00:18:06.383 } 00:18:06.383 ] 00:18:06.383 }' 00:18:06.383 
04:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.383 04:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.644 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:06.644 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:06.644 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.644 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.644 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:06.644 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.908 [2024-11-27 04:35:03.279785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:06.908 [2024-11-27 04:35:03.279897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.908 [2024-11-27 04:35:03.385674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.908 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.908 [2024-11-27 04:35:03.437630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.168 [2024-11-27 04:35:03.598234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:07.168 [2024-11-27 04:35:03.598290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.168 04:35:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.168 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 BaseBdev2 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 [ 00:18:07.428 { 00:18:07.428 "name": "BaseBdev2", 00:18:07.428 "aliases": [ 00:18:07.428 "e2bb5234-5514-409e-b94b-88ef2a5823b8" 00:18:07.428 ], 00:18:07.428 "product_name": "Malloc disk", 00:18:07.428 "block_size": 512, 00:18:07.428 "num_blocks": 65536, 00:18:07.428 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:07.428 "assigned_rate_limits": { 00:18:07.428 "rw_ios_per_sec": 0, 00:18:07.428 "rw_mbytes_per_sec": 0, 00:18:07.428 "r_mbytes_per_sec": 0, 00:18:07.428 "w_mbytes_per_sec": 0 00:18:07.428 }, 00:18:07.428 "claimed": false, 00:18:07.428 "zoned": false, 00:18:07.428 "supported_io_types": { 00:18:07.428 "read": true, 00:18:07.428 "write": true, 00:18:07.428 "unmap": true, 00:18:07.428 "flush": true, 00:18:07.428 "reset": true, 00:18:07.428 "nvme_admin": false, 00:18:07.428 "nvme_io": false, 00:18:07.428 "nvme_io_md": false, 00:18:07.428 "write_zeroes": true, 00:18:07.428 "zcopy": true, 00:18:07.428 "get_zone_info": false, 00:18:07.428 "zone_management": false, 00:18:07.428 "zone_append": false, 00:18:07.428 "compare": false, 00:18:07.428 "compare_and_write": false, 00:18:07.428 "abort": true, 00:18:07.428 "seek_hole": false, 00:18:07.428 "seek_data": false, 00:18:07.428 "copy": true, 00:18:07.428 "nvme_iov_md": false 00:18:07.428 }, 00:18:07.428 "memory_domains": [ 00:18:07.428 { 00:18:07.428 "dma_device_id": "system", 00:18:07.428 "dma_device_type": 1 00:18:07.428 }, 
00:18:07.428 { 00:18:07.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.428 "dma_device_type": 2 00:18:07.428 } 00:18:07.428 ], 00:18:07.428 "driver_specific": {} 00:18:07.428 } 00:18:07.428 ] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 BaseBdev3 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 [ 00:18:07.428 { 00:18:07.428 "name": "BaseBdev3", 00:18:07.428 "aliases": [ 00:18:07.428 "aa193178-cb02-4ac1-8424-5950873e44f4" 00:18:07.428 ], 00:18:07.428 "product_name": "Malloc disk", 00:18:07.428 "block_size": 512, 00:18:07.428 "num_blocks": 65536, 00:18:07.428 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:07.428 "assigned_rate_limits": { 00:18:07.428 "rw_ios_per_sec": 0, 00:18:07.428 "rw_mbytes_per_sec": 0, 00:18:07.428 "r_mbytes_per_sec": 0, 00:18:07.428 "w_mbytes_per_sec": 0 00:18:07.428 }, 00:18:07.428 "claimed": false, 00:18:07.428 "zoned": false, 00:18:07.428 "supported_io_types": { 00:18:07.428 "read": true, 00:18:07.428 "write": true, 00:18:07.428 "unmap": true, 00:18:07.428 "flush": true, 00:18:07.428 "reset": true, 00:18:07.428 "nvme_admin": false, 00:18:07.428 "nvme_io": false, 00:18:07.428 "nvme_io_md": false, 00:18:07.428 "write_zeroes": true, 00:18:07.428 "zcopy": true, 00:18:07.428 "get_zone_info": false, 00:18:07.428 "zone_management": false, 00:18:07.428 "zone_append": false, 00:18:07.428 "compare": false, 00:18:07.428 "compare_and_write": false, 00:18:07.428 "abort": true, 00:18:07.428 "seek_hole": false, 00:18:07.428 "seek_data": false, 00:18:07.428 "copy": true, 00:18:07.428 "nvme_iov_md": false 00:18:07.428 }, 00:18:07.428 "memory_domains": [ 00:18:07.428 { 00:18:07.428 "dma_device_id": "system", 00:18:07.428 
"dma_device_type": 1 00:18:07.428 }, 00:18:07.428 { 00:18:07.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.428 "dma_device_type": 2 00:18:07.428 } 00:18:07.428 ], 00:18:07.428 "driver_specific": {} 00:18:07.428 } 00:18:07.428 ] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 BaseBdev4 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:07.428 04:35:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.428 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 [ 00:18:07.428 { 00:18:07.428 "name": "BaseBdev4", 00:18:07.428 "aliases": [ 00:18:07.428 "b5a897d3-2652-44dd-ae9d-f2b68c5771b4" 00:18:07.428 ], 00:18:07.428 "product_name": "Malloc disk", 00:18:07.428 "block_size": 512, 00:18:07.428 "num_blocks": 65536, 00:18:07.428 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:07.428 "assigned_rate_limits": { 00:18:07.428 "rw_ios_per_sec": 0, 00:18:07.428 "rw_mbytes_per_sec": 0, 00:18:07.428 "r_mbytes_per_sec": 0, 00:18:07.428 "w_mbytes_per_sec": 0 00:18:07.428 }, 00:18:07.428 "claimed": false, 00:18:07.429 "zoned": false, 00:18:07.429 "supported_io_types": { 00:18:07.429 "read": true, 00:18:07.429 "write": true, 00:18:07.429 "unmap": true, 00:18:07.429 "flush": true, 00:18:07.429 "reset": true, 00:18:07.429 "nvme_admin": false, 00:18:07.429 "nvme_io": false, 00:18:07.429 "nvme_io_md": false, 00:18:07.429 "write_zeroes": true, 00:18:07.429 "zcopy": true, 00:18:07.429 "get_zone_info": false, 00:18:07.429 "zone_management": false, 00:18:07.429 "zone_append": false, 00:18:07.429 "compare": false, 00:18:07.429 "compare_and_write": false, 00:18:07.429 "abort": true, 00:18:07.429 "seek_hole": false, 00:18:07.429 "seek_data": false, 00:18:07.429 "copy": true, 00:18:07.429 "nvme_iov_md": false 00:18:07.429 }, 00:18:07.429 "memory_domains": [ 00:18:07.429 { 00:18:07.429 
"dma_device_id": "system", 00:18:07.429 "dma_device_type": 1 00:18:07.429 }, 00:18:07.429 { 00:18:07.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.429 "dma_device_type": 2 00:18:07.429 } 00:18:07.429 ], 00:18:07.429 "driver_specific": {} 00:18:07.429 } 00:18:07.429 ] 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.429 [2024-11-27 04:35:03.994034] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.429 [2024-11-27 04:35:03.994131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.429 [2024-11-27 04:35:03.994178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.429 [2024-11-27 04:35:03.996159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:07.429 [2024-11-27 04:35:03.996262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.429 04:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.429 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.429 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.429 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.429 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.688 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.688 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.688 "name": "Existed_Raid", 00:18:07.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.688 "strip_size_kb": 64, 00:18:07.688 "state": "configuring", 00:18:07.688 "raid_level": "raid5f", 00:18:07.688 "superblock": false, 00:18:07.688 
"num_base_bdevs": 4, 00:18:07.688 "num_base_bdevs_discovered": 3, 00:18:07.688 "num_base_bdevs_operational": 4, 00:18:07.688 "base_bdevs_list": [ 00:18:07.688 { 00:18:07.688 "name": "BaseBdev1", 00:18:07.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.689 "is_configured": false, 00:18:07.689 "data_offset": 0, 00:18:07.689 "data_size": 0 00:18:07.689 }, 00:18:07.689 { 00:18:07.689 "name": "BaseBdev2", 00:18:07.689 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:07.689 "is_configured": true, 00:18:07.689 "data_offset": 0, 00:18:07.689 "data_size": 65536 00:18:07.689 }, 00:18:07.689 { 00:18:07.689 "name": "BaseBdev3", 00:18:07.689 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:07.689 "is_configured": true, 00:18:07.689 "data_offset": 0, 00:18:07.689 "data_size": 65536 00:18:07.689 }, 00:18:07.689 { 00:18:07.689 "name": "BaseBdev4", 00:18:07.689 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:07.689 "is_configured": true, 00:18:07.689 "data_offset": 0, 00:18:07.689 "data_size": 65536 00:18:07.689 } 00:18:07.689 ] 00:18:07.689 }' 00:18:07.689 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.689 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.948 [2024-11-27 04:35:04.393382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.948 "name": "Existed_Raid", 00:18:07.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.948 "strip_size_kb": 64, 00:18:07.948 "state": "configuring", 00:18:07.948 "raid_level": "raid5f", 00:18:07.948 "superblock": false, 00:18:07.948 "num_base_bdevs": 4, 
00:18:07.948 "num_base_bdevs_discovered": 2, 00:18:07.948 "num_base_bdevs_operational": 4, 00:18:07.948 "base_bdevs_list": [ 00:18:07.948 { 00:18:07.948 "name": "BaseBdev1", 00:18:07.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.948 "is_configured": false, 00:18:07.948 "data_offset": 0, 00:18:07.948 "data_size": 0 00:18:07.948 }, 00:18:07.948 { 00:18:07.948 "name": null, 00:18:07.948 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:07.948 "is_configured": false, 00:18:07.948 "data_offset": 0, 00:18:07.948 "data_size": 65536 00:18:07.948 }, 00:18:07.948 { 00:18:07.948 "name": "BaseBdev3", 00:18:07.948 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:07.948 "is_configured": true, 00:18:07.948 "data_offset": 0, 00:18:07.948 "data_size": 65536 00:18:07.948 }, 00:18:07.948 { 00:18:07.948 "name": "BaseBdev4", 00:18:07.948 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:07.948 "is_configured": true, 00:18:07.948 "data_offset": 0, 00:18:07.948 "data_size": 65536 00:18:07.948 } 00:18:07.948 ] 00:18:07.948 }' 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.948 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.516 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:08.516 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.516 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.516 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.516 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:08.517 04:35:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.517 [2024-11-27 04:35:04.941090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.517 BaseBdev1 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.517 04:35:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.517 [ 00:18:08.517 { 00:18:08.517 "name": "BaseBdev1", 00:18:08.517 "aliases": [ 00:18:08.517 "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd" 00:18:08.517 ], 00:18:08.517 "product_name": "Malloc disk", 00:18:08.517 "block_size": 512, 00:18:08.517 "num_blocks": 65536, 00:18:08.517 "uuid": "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd", 00:18:08.517 "assigned_rate_limits": { 00:18:08.517 "rw_ios_per_sec": 0, 00:18:08.517 "rw_mbytes_per_sec": 0, 00:18:08.517 "r_mbytes_per_sec": 0, 00:18:08.517 "w_mbytes_per_sec": 0 00:18:08.517 }, 00:18:08.517 "claimed": true, 00:18:08.517 "claim_type": "exclusive_write", 00:18:08.517 "zoned": false, 00:18:08.517 "supported_io_types": { 00:18:08.517 "read": true, 00:18:08.517 "write": true, 00:18:08.517 "unmap": true, 00:18:08.517 "flush": true, 00:18:08.517 "reset": true, 00:18:08.517 "nvme_admin": false, 00:18:08.517 "nvme_io": false, 00:18:08.517 "nvme_io_md": false, 00:18:08.517 "write_zeroes": true, 00:18:08.517 "zcopy": true, 00:18:08.517 "get_zone_info": false, 00:18:08.517 "zone_management": false, 00:18:08.517 "zone_append": false, 00:18:08.517 "compare": false, 00:18:08.517 "compare_and_write": false, 00:18:08.517 "abort": true, 00:18:08.517 "seek_hole": false, 00:18:08.517 "seek_data": false, 00:18:08.517 "copy": true, 00:18:08.517 "nvme_iov_md": false 00:18:08.517 }, 00:18:08.517 "memory_domains": [ 00:18:08.517 { 00:18:08.517 "dma_device_id": "system", 00:18:08.517 "dma_device_type": 1 00:18:08.517 }, 00:18:08.517 { 00:18:08.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.517 "dma_device_type": 2 00:18:08.517 } 00:18:08.517 ], 00:18:08.517 "driver_specific": {} 00:18:08.517 } 00:18:08.517 ] 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:08.517 04:35:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.517 04:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.517 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.517 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.517 "name": "Existed_Raid", 00:18:08.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.517 "strip_size_kb": 64, 00:18:08.517 "state": 
"configuring", 00:18:08.517 "raid_level": "raid5f", 00:18:08.517 "superblock": false, 00:18:08.517 "num_base_bdevs": 4, 00:18:08.517 "num_base_bdevs_discovered": 3, 00:18:08.517 "num_base_bdevs_operational": 4, 00:18:08.517 "base_bdevs_list": [ 00:18:08.517 { 00:18:08.517 "name": "BaseBdev1", 00:18:08.517 "uuid": "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd", 00:18:08.517 "is_configured": true, 00:18:08.517 "data_offset": 0, 00:18:08.517 "data_size": 65536 00:18:08.517 }, 00:18:08.517 { 00:18:08.517 "name": null, 00:18:08.517 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:08.517 "is_configured": false, 00:18:08.517 "data_offset": 0, 00:18:08.517 "data_size": 65536 00:18:08.517 }, 00:18:08.517 { 00:18:08.517 "name": "BaseBdev3", 00:18:08.517 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:08.517 "is_configured": true, 00:18:08.517 "data_offset": 0, 00:18:08.517 "data_size": 65536 00:18:08.517 }, 00:18:08.517 { 00:18:08.517 "name": "BaseBdev4", 00:18:08.517 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:08.517 "is_configured": true, 00:18:08.517 "data_offset": 0, 00:18:08.517 "data_size": 65536 00:18:08.517 } 00:18:08.517 ] 00:18:08.517 }' 00:18:08.517 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.517 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.088 04:35:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.088 [2024-11-27 04:35:05.520288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.088 04:35:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.088 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.088 "name": "Existed_Raid", 00:18:09.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.088 "strip_size_kb": 64, 00:18:09.088 "state": "configuring", 00:18:09.088 "raid_level": "raid5f", 00:18:09.088 "superblock": false, 00:18:09.088 "num_base_bdevs": 4, 00:18:09.088 "num_base_bdevs_discovered": 2, 00:18:09.088 "num_base_bdevs_operational": 4, 00:18:09.088 "base_bdevs_list": [ 00:18:09.088 { 00:18:09.088 "name": "BaseBdev1", 00:18:09.088 "uuid": "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd", 00:18:09.088 "is_configured": true, 00:18:09.088 "data_offset": 0, 00:18:09.088 "data_size": 65536 00:18:09.088 }, 00:18:09.088 { 00:18:09.088 "name": null, 00:18:09.088 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:09.088 "is_configured": false, 00:18:09.088 "data_offset": 0, 00:18:09.088 "data_size": 65536 00:18:09.088 }, 00:18:09.088 { 00:18:09.088 "name": null, 00:18:09.088 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:09.088 "is_configured": false, 00:18:09.088 "data_offset": 0, 00:18:09.088 "data_size": 65536 00:18:09.088 }, 00:18:09.088 { 00:18:09.088 "name": "BaseBdev4", 00:18:09.089 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:09.089 "is_configured": true, 00:18:09.089 "data_offset": 0, 00:18:09.089 "data_size": 65536 00:18:09.089 } 00:18:09.089 ] 00:18:09.089 }' 00:18:09.089 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.089 04:35:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.657 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.657 04:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:09.657 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.657 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.657 04:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.657 [2024-11-27 04:35:06.027408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.657 
04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.657 "name": "Existed_Raid", 00:18:09.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.657 "strip_size_kb": 64, 00:18:09.657 "state": "configuring", 00:18:09.657 "raid_level": "raid5f", 00:18:09.657 "superblock": false, 00:18:09.657 "num_base_bdevs": 4, 00:18:09.657 "num_base_bdevs_discovered": 3, 00:18:09.657 "num_base_bdevs_operational": 4, 00:18:09.657 "base_bdevs_list": [ 00:18:09.657 { 00:18:09.657 "name": "BaseBdev1", 00:18:09.657 "uuid": "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd", 00:18:09.657 "is_configured": true, 00:18:09.657 "data_offset": 0, 00:18:09.657 "data_size": 65536 00:18:09.657 }, 00:18:09.657 { 00:18:09.657 "name": null, 00:18:09.657 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:09.657 "is_configured": 
false, 00:18:09.657 "data_offset": 0, 00:18:09.657 "data_size": 65536 00:18:09.657 }, 00:18:09.657 { 00:18:09.657 "name": "BaseBdev3", 00:18:09.657 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:09.657 "is_configured": true, 00:18:09.657 "data_offset": 0, 00:18:09.657 "data_size": 65536 00:18:09.657 }, 00:18:09.657 { 00:18:09.657 "name": "BaseBdev4", 00:18:09.657 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:09.657 "is_configured": true, 00:18:09.657 "data_offset": 0, 00:18:09.657 "data_size": 65536 00:18:09.657 } 00:18:09.657 ] 00:18:09.657 }' 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.657 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.225 [2024-11-27 04:35:06.574554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.225 "name": "Existed_Raid", 00:18:10.225 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:10.225 "strip_size_kb": 64, 00:18:10.225 "state": "configuring", 00:18:10.225 "raid_level": "raid5f", 00:18:10.225 "superblock": false, 00:18:10.225 "num_base_bdevs": 4, 00:18:10.225 "num_base_bdevs_discovered": 2, 00:18:10.225 "num_base_bdevs_operational": 4, 00:18:10.225 "base_bdevs_list": [ 00:18:10.225 { 00:18:10.225 "name": null, 00:18:10.225 "uuid": "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd", 00:18:10.225 "is_configured": false, 00:18:10.225 "data_offset": 0, 00:18:10.225 "data_size": 65536 00:18:10.225 }, 00:18:10.225 { 00:18:10.225 "name": null, 00:18:10.225 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:10.225 "is_configured": false, 00:18:10.225 "data_offset": 0, 00:18:10.225 "data_size": 65536 00:18:10.225 }, 00:18:10.225 { 00:18:10.225 "name": "BaseBdev3", 00:18:10.225 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:10.225 "is_configured": true, 00:18:10.225 "data_offset": 0, 00:18:10.225 "data_size": 65536 00:18:10.225 }, 00:18:10.225 { 00:18:10.225 "name": "BaseBdev4", 00:18:10.225 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:10.225 "is_configured": true, 00:18:10.225 "data_offset": 0, 00:18:10.225 "data_size": 65536 00:18:10.225 } 00:18:10.225 ] 00:18:10.225 }' 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.225 04:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.795 [2024-11-27 04:35:07.194668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.795 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.796 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.796 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.796 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.796 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.796 "name": "Existed_Raid", 00:18:10.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.796 "strip_size_kb": 64, 00:18:10.796 "state": "configuring", 00:18:10.796 "raid_level": "raid5f", 00:18:10.796 "superblock": false, 00:18:10.796 "num_base_bdevs": 4, 00:18:10.796 "num_base_bdevs_discovered": 3, 00:18:10.796 "num_base_bdevs_operational": 4, 00:18:10.796 "base_bdevs_list": [ 00:18:10.796 { 00:18:10.796 "name": null, 00:18:10.796 "uuid": "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd", 00:18:10.796 "is_configured": false, 00:18:10.796 "data_offset": 0, 00:18:10.796 "data_size": 65536 00:18:10.796 }, 00:18:10.796 { 00:18:10.796 "name": "BaseBdev2", 00:18:10.796 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:10.796 "is_configured": true, 00:18:10.796 "data_offset": 0, 00:18:10.796 "data_size": 65536 00:18:10.796 }, 00:18:10.796 { 00:18:10.796 "name": "BaseBdev3", 00:18:10.796 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:10.796 "is_configured": true, 00:18:10.796 "data_offset": 0, 00:18:10.796 "data_size": 65536 00:18:10.796 }, 00:18:10.796 { 00:18:10.796 "name": "BaseBdev4", 00:18:10.796 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:10.796 "is_configured": true, 00:18:10.796 "data_offset": 0, 00:18:10.796 "data_size": 65536 00:18:10.796 } 00:18:10.796 ] 00:18:10.796 }' 00:18:10.796 04:35:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.796 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.056 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.056 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.056 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.056 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:11.056 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.316 [2024-11-27 04:35:07.719728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:11.316 [2024-11-27 
04:35:07.719801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:11.316 [2024-11-27 04:35:07.719810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:11.316 [2024-11-27 04:35:07.720138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:11.316 [2024-11-27 04:35:07.727923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:11.316 [2024-11-27 04:35:07.727959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:11.316 [2024-11-27 04:35:07.728308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.316 NewBaseBdev 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:11.316 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.317 [ 00:18:11.317 { 00:18:11.317 "name": "NewBaseBdev", 00:18:11.317 "aliases": [ 00:18:11.317 "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd" 00:18:11.317 ], 00:18:11.317 "product_name": "Malloc disk", 00:18:11.317 "block_size": 512, 00:18:11.317 "num_blocks": 65536, 00:18:11.317 "uuid": "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd", 00:18:11.317 "assigned_rate_limits": { 00:18:11.317 "rw_ios_per_sec": 0, 00:18:11.317 "rw_mbytes_per_sec": 0, 00:18:11.317 "r_mbytes_per_sec": 0, 00:18:11.317 "w_mbytes_per_sec": 0 00:18:11.317 }, 00:18:11.317 "claimed": true, 00:18:11.317 "claim_type": "exclusive_write", 00:18:11.317 "zoned": false, 00:18:11.317 "supported_io_types": { 00:18:11.317 "read": true, 00:18:11.317 "write": true, 00:18:11.317 "unmap": true, 00:18:11.317 "flush": true, 00:18:11.317 "reset": true, 00:18:11.317 "nvme_admin": false, 00:18:11.317 "nvme_io": false, 00:18:11.317 "nvme_io_md": false, 00:18:11.317 "write_zeroes": true, 00:18:11.317 "zcopy": true, 00:18:11.317 "get_zone_info": false, 00:18:11.317 "zone_management": false, 00:18:11.317 "zone_append": false, 00:18:11.317 "compare": false, 00:18:11.317 "compare_and_write": false, 00:18:11.317 "abort": true, 00:18:11.317 "seek_hole": false, 00:18:11.317 "seek_data": false, 00:18:11.317 "copy": true, 00:18:11.317 "nvme_iov_md": false 00:18:11.317 }, 00:18:11.317 "memory_domains": [ 00:18:11.317 { 00:18:11.317 "dma_device_id": "system", 00:18:11.317 "dma_device_type": 1 00:18:11.317 }, 00:18:11.317 { 00:18:11.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.317 "dma_device_type": 2 00:18:11.317 } 
00:18:11.317 ], 00:18:11.317 "driver_specific": {} 00:18:11.317 } 00:18:11.317 ] 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.317 "name": "Existed_Raid", 00:18:11.317 "uuid": "a0df30b1-d2af-4dc9-907a-ad590dac1c8e", 00:18:11.317 "strip_size_kb": 64, 00:18:11.317 "state": "online", 00:18:11.317 "raid_level": "raid5f", 00:18:11.317 "superblock": false, 00:18:11.317 "num_base_bdevs": 4, 00:18:11.317 "num_base_bdevs_discovered": 4, 00:18:11.317 "num_base_bdevs_operational": 4, 00:18:11.317 "base_bdevs_list": [ 00:18:11.317 { 00:18:11.317 "name": "NewBaseBdev", 00:18:11.317 "uuid": "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd", 00:18:11.317 "is_configured": true, 00:18:11.317 "data_offset": 0, 00:18:11.317 "data_size": 65536 00:18:11.317 }, 00:18:11.317 { 00:18:11.317 "name": "BaseBdev2", 00:18:11.317 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:11.317 "is_configured": true, 00:18:11.317 "data_offset": 0, 00:18:11.317 "data_size": 65536 00:18:11.317 }, 00:18:11.317 { 00:18:11.317 "name": "BaseBdev3", 00:18:11.317 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:11.317 "is_configured": true, 00:18:11.317 "data_offset": 0, 00:18:11.317 "data_size": 65536 00:18:11.317 }, 00:18:11.317 { 00:18:11.317 "name": "BaseBdev4", 00:18:11.317 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:11.317 "is_configured": true, 00:18:11.317 "data_offset": 0, 00:18:11.317 "data_size": 65536 00:18:11.317 } 00:18:11.317 ] 00:18:11.317 }' 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.317 04:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.886 [2024-11-27 04:35:08.234008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:11.886 "name": "Existed_Raid", 00:18:11.886 "aliases": [ 00:18:11.886 "a0df30b1-d2af-4dc9-907a-ad590dac1c8e" 00:18:11.886 ], 00:18:11.886 "product_name": "Raid Volume", 00:18:11.886 "block_size": 512, 00:18:11.886 "num_blocks": 196608, 00:18:11.886 "uuid": "a0df30b1-d2af-4dc9-907a-ad590dac1c8e", 00:18:11.886 "assigned_rate_limits": { 00:18:11.886 "rw_ios_per_sec": 0, 00:18:11.886 "rw_mbytes_per_sec": 0, 00:18:11.886 "r_mbytes_per_sec": 0, 00:18:11.886 "w_mbytes_per_sec": 0 00:18:11.886 }, 00:18:11.886 "claimed": false, 00:18:11.886 "zoned": false, 00:18:11.886 "supported_io_types": { 00:18:11.886 "read": true, 00:18:11.886 "write": true, 00:18:11.886 "unmap": false, 00:18:11.886 "flush": false, 00:18:11.886 "reset": true, 00:18:11.886 "nvme_admin": false, 00:18:11.886 "nvme_io": false, 00:18:11.886 "nvme_io_md": 
false, 00:18:11.886 "write_zeroes": true, 00:18:11.886 "zcopy": false, 00:18:11.886 "get_zone_info": false, 00:18:11.886 "zone_management": false, 00:18:11.886 "zone_append": false, 00:18:11.886 "compare": false, 00:18:11.886 "compare_and_write": false, 00:18:11.886 "abort": false, 00:18:11.886 "seek_hole": false, 00:18:11.886 "seek_data": false, 00:18:11.886 "copy": false, 00:18:11.886 "nvme_iov_md": false 00:18:11.886 }, 00:18:11.886 "driver_specific": { 00:18:11.886 "raid": { 00:18:11.886 "uuid": "a0df30b1-d2af-4dc9-907a-ad590dac1c8e", 00:18:11.886 "strip_size_kb": 64, 00:18:11.886 "state": "online", 00:18:11.886 "raid_level": "raid5f", 00:18:11.886 "superblock": false, 00:18:11.886 "num_base_bdevs": 4, 00:18:11.886 "num_base_bdevs_discovered": 4, 00:18:11.886 "num_base_bdevs_operational": 4, 00:18:11.886 "base_bdevs_list": [ 00:18:11.886 { 00:18:11.886 "name": "NewBaseBdev", 00:18:11.886 "uuid": "d55d19a8-f8e8-48b6-bd56-ee64a8d4f7fd", 00:18:11.886 "is_configured": true, 00:18:11.886 "data_offset": 0, 00:18:11.886 "data_size": 65536 00:18:11.886 }, 00:18:11.886 { 00:18:11.886 "name": "BaseBdev2", 00:18:11.886 "uuid": "e2bb5234-5514-409e-b94b-88ef2a5823b8", 00:18:11.886 "is_configured": true, 00:18:11.886 "data_offset": 0, 00:18:11.886 "data_size": 65536 00:18:11.886 }, 00:18:11.886 { 00:18:11.886 "name": "BaseBdev3", 00:18:11.886 "uuid": "aa193178-cb02-4ac1-8424-5950873e44f4", 00:18:11.886 "is_configured": true, 00:18:11.886 "data_offset": 0, 00:18:11.886 "data_size": 65536 00:18:11.886 }, 00:18:11.886 { 00:18:11.886 "name": "BaseBdev4", 00:18:11.886 "uuid": "b5a897d3-2652-44dd-ae9d-f2b68c5771b4", 00:18:11.886 "is_configured": true, 00:18:11.886 "data_offset": 0, 00:18:11.886 "data_size": 65536 00:18:11.886 } 00:18:11.886 ] 00:18:11.886 } 00:18:11.886 } 00:18:11.886 }' 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:11.886 04:35:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:11.886 BaseBdev2 00:18:11.886 BaseBdev3 00:18:11.886 BaseBdev4' 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.886 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:18:11.887 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.146 [2024-11-27 04:35:08.589150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:12.146 [2024-11-27 04:35:08.589181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.146 [2024-11-27 04:35:08.589274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.146 [2024-11-27 04:35:08.589606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.146 [2024-11-27 04:35:08.589619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83123 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83123 ']' 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83123 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.146 04:35:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83123 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.146 killing process with pid 83123 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83123' 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83123 00:18:12.146 [2024-11-27 04:35:08.638724] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.146 04:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83123 00:18:12.714 [2024-11-27 04:35:09.054700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:14.092 04:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:14.092 00:18:14.092 real 0m12.106s 00:18:14.093 user 0m19.193s 00:18:14.093 sys 0m2.214s 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.093 ************************************ 00:18:14.093 END TEST raid5f_state_function_test 00:18:14.093 ************************************ 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.093 04:35:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:14.093 04:35:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:14.093 04:35:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.093 04:35:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:14.093 ************************************ 00:18:14.093 START TEST 
raid5f_state_function_test_sb 00:18:14.093 ************************************ 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:14.093 
04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:14.093 Process raid pid: 83796 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83796 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83796' 00:18:14.093 04:35:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83796 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83796 ']' 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.093 04:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.093 [2024-11-27 04:35:10.435793] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:14.093 [2024-11-27 04:35:10.436036] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.093 [2024-11-27 04:35:10.613353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.353 [2024-11-27 04:35:10.732407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.612 [2024-11-27 04:35:10.948407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.612 [2024-11-27 04:35:10.948531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.873 [2024-11-27 04:35:11.339002] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.873 [2024-11-27 04:35:11.339064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.873 [2024-11-27 04:35:11.339077] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.873 [2024-11-27 04:35:11.339097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.873 [2024-11-27 04:35:11.339105] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:14.873 [2024-11-27 04:35:11.339115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.873 [2024-11-27 04:35:11.339122] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:14.873 [2024-11-27 04:35:11.339131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.873 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.873 "name": "Existed_Raid", 00:18:14.873 "uuid": "e80d94c8-ee53-4982-b59f-44b09de61ea0", 00:18:14.873 "strip_size_kb": 64, 00:18:14.873 "state": "configuring", 00:18:14.873 "raid_level": "raid5f", 00:18:14.873 "superblock": true, 00:18:14.873 "num_base_bdevs": 4, 00:18:14.874 "num_base_bdevs_discovered": 0, 00:18:14.874 "num_base_bdevs_operational": 4, 00:18:14.874 "base_bdevs_list": [ 00:18:14.874 { 00:18:14.874 "name": "BaseBdev1", 00:18:14.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.874 "is_configured": false, 00:18:14.874 "data_offset": 0, 00:18:14.874 "data_size": 0 00:18:14.874 }, 00:18:14.874 { 00:18:14.874 "name": "BaseBdev2", 00:18:14.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.874 "is_configured": false, 00:18:14.874 "data_offset": 0, 00:18:14.874 "data_size": 0 00:18:14.874 }, 00:18:14.874 { 00:18:14.874 "name": "BaseBdev3", 00:18:14.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.874 "is_configured": false, 00:18:14.874 "data_offset": 0, 00:18:14.874 "data_size": 0 00:18:14.874 }, 00:18:14.874 { 00:18:14.874 "name": "BaseBdev4", 00:18:14.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.874 "is_configured": false, 00:18:14.874 "data_offset": 0, 00:18:14.874 "data_size": 0 00:18:14.874 } 00:18:14.874 ] 00:18:14.874 }' 00:18:14.874 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.874 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.444 [2024-11-27 04:35:11.794165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.444 [2024-11-27 04:35:11.794273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.444 [2024-11-27 04:35:11.806171] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.444 [2024-11-27 04:35:11.806266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.444 [2024-11-27 04:35:11.806299] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.444 [2024-11-27 04:35:11.806322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.444 [2024-11-27 04:35:11.806392] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.444 [2024-11-27 04:35:11.806416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.444 [2024-11-27 04:35:11.806443] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:15.444 [2024-11-27 04:35:11.806465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.444 [2024-11-27 04:35:11.856055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.444 BaseBdev1 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.444 [ 00:18:15.444 { 00:18:15.444 "name": "BaseBdev1", 00:18:15.444 "aliases": [ 00:18:15.444 "f9118186-8197-4d3d-a6d3-fb1a937c45b4" 00:18:15.444 ], 00:18:15.444 "product_name": "Malloc disk", 00:18:15.444 "block_size": 512, 00:18:15.444 "num_blocks": 65536, 00:18:15.444 "uuid": "f9118186-8197-4d3d-a6d3-fb1a937c45b4", 00:18:15.444 "assigned_rate_limits": { 00:18:15.444 "rw_ios_per_sec": 0, 00:18:15.444 "rw_mbytes_per_sec": 0, 00:18:15.444 "r_mbytes_per_sec": 0, 00:18:15.444 "w_mbytes_per_sec": 0 00:18:15.444 }, 00:18:15.444 "claimed": true, 00:18:15.444 "claim_type": "exclusive_write", 00:18:15.444 "zoned": false, 00:18:15.444 "supported_io_types": { 00:18:15.444 "read": true, 00:18:15.444 "write": true, 00:18:15.444 "unmap": true, 00:18:15.444 "flush": true, 00:18:15.444 "reset": true, 00:18:15.444 "nvme_admin": false, 00:18:15.444 "nvme_io": false, 00:18:15.444 "nvme_io_md": false, 00:18:15.444 "write_zeroes": true, 00:18:15.444 "zcopy": true, 00:18:15.444 "get_zone_info": false, 00:18:15.444 "zone_management": false, 00:18:15.444 "zone_append": false, 00:18:15.444 "compare": false, 00:18:15.444 "compare_and_write": false, 00:18:15.444 "abort": true, 00:18:15.444 "seek_hole": false, 00:18:15.444 "seek_data": false, 00:18:15.444 "copy": true, 00:18:15.444 "nvme_iov_md": false 00:18:15.444 }, 00:18:15.444 "memory_domains": [ 00:18:15.444 { 00:18:15.444 "dma_device_id": "system", 00:18:15.444 "dma_device_type": 1 00:18:15.444 }, 00:18:15.444 { 00:18:15.444 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:15.444 "dma_device_type": 2 00:18:15.444 } 00:18:15.444 ], 00:18:15.444 "driver_specific": {} 00:18:15.444 } 00:18:15.444 ] 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.444 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.445 "name": "Existed_Raid", 00:18:15.445 "uuid": "19875970-463b-465a-aaaa-58db2bbf9a92", 00:18:15.445 "strip_size_kb": 64, 00:18:15.445 "state": "configuring", 00:18:15.445 "raid_level": "raid5f", 00:18:15.445 "superblock": true, 00:18:15.445 "num_base_bdevs": 4, 00:18:15.445 "num_base_bdevs_discovered": 1, 00:18:15.445 "num_base_bdevs_operational": 4, 00:18:15.445 "base_bdevs_list": [ 00:18:15.445 { 00:18:15.445 "name": "BaseBdev1", 00:18:15.445 "uuid": "f9118186-8197-4d3d-a6d3-fb1a937c45b4", 00:18:15.445 "is_configured": true, 00:18:15.445 "data_offset": 2048, 00:18:15.445 "data_size": 63488 00:18:15.445 }, 00:18:15.445 { 00:18:15.445 "name": "BaseBdev2", 00:18:15.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.445 "is_configured": false, 00:18:15.445 "data_offset": 0, 00:18:15.445 "data_size": 0 00:18:15.445 }, 00:18:15.445 { 00:18:15.445 "name": "BaseBdev3", 00:18:15.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.445 "is_configured": false, 00:18:15.445 "data_offset": 0, 00:18:15.445 "data_size": 0 00:18:15.445 }, 00:18:15.445 { 00:18:15.445 "name": "BaseBdev4", 00:18:15.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.445 "is_configured": false, 00:18:15.445 "data_offset": 0, 00:18:15.445 "data_size": 0 00:18:15.445 } 00:18:15.445 ] 00:18:15.445 }' 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.445 04:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:16.015 04:35:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.015 [2024-11-27 04:35:12.359289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.015 [2024-11-27 04:35:12.359352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.015 [2024-11-27 04:35:12.367355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.015 [2024-11-27 04:35:12.369355] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.015 [2024-11-27 04:35:12.369400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.015 [2024-11-27 04:35:12.369412] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:16.015 [2024-11-27 04:35:12.369425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:16.015 [2024-11-27 04:35:12.369433] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:16.015 [2024-11-27 04:35:12.369443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.015 04:35:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.015 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.015 "name": "Existed_Raid", 00:18:16.015 "uuid": "3cdbe458-8c66-4ef0-881c-c4cd8174e118", 00:18:16.015 "strip_size_kb": 64, 00:18:16.015 "state": "configuring", 00:18:16.015 "raid_level": "raid5f", 00:18:16.015 "superblock": true, 00:18:16.015 "num_base_bdevs": 4, 00:18:16.015 "num_base_bdevs_discovered": 1, 00:18:16.015 "num_base_bdevs_operational": 4, 00:18:16.015 "base_bdevs_list": [ 00:18:16.015 { 00:18:16.015 "name": "BaseBdev1", 00:18:16.015 "uuid": "f9118186-8197-4d3d-a6d3-fb1a937c45b4", 00:18:16.015 "is_configured": true, 00:18:16.015 "data_offset": 2048, 00:18:16.015 "data_size": 63488 00:18:16.015 }, 00:18:16.015 { 00:18:16.015 "name": "BaseBdev2", 00:18:16.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.015 "is_configured": false, 00:18:16.015 "data_offset": 0, 00:18:16.015 "data_size": 0 00:18:16.015 }, 00:18:16.015 { 00:18:16.016 "name": "BaseBdev3", 00:18:16.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.016 "is_configured": false, 00:18:16.016 "data_offset": 0, 00:18:16.016 "data_size": 0 00:18:16.016 }, 00:18:16.016 { 00:18:16.016 "name": "BaseBdev4", 00:18:16.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.016 "is_configured": false, 00:18:16.016 "data_offset": 0, 00:18:16.016 "data_size": 0 00:18:16.016 } 00:18:16.016 ] 00:18:16.016 }' 00:18:16.016 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.016 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.276 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:16.276 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:16.276 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.536 [2024-11-27 04:35:12.890045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.536 BaseBdev2 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.536 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.536 [ 00:18:16.536 { 00:18:16.536 "name": "BaseBdev2", 00:18:16.536 "aliases": [ 00:18:16.536 
"a86ff69a-d248-44a3-8f8f-63c22f179e67" 00:18:16.536 ], 00:18:16.536 "product_name": "Malloc disk", 00:18:16.536 "block_size": 512, 00:18:16.536 "num_blocks": 65536, 00:18:16.536 "uuid": "a86ff69a-d248-44a3-8f8f-63c22f179e67", 00:18:16.536 "assigned_rate_limits": { 00:18:16.536 "rw_ios_per_sec": 0, 00:18:16.536 "rw_mbytes_per_sec": 0, 00:18:16.536 "r_mbytes_per_sec": 0, 00:18:16.536 "w_mbytes_per_sec": 0 00:18:16.536 }, 00:18:16.536 "claimed": true, 00:18:16.536 "claim_type": "exclusive_write", 00:18:16.536 "zoned": false, 00:18:16.536 "supported_io_types": { 00:18:16.536 "read": true, 00:18:16.536 "write": true, 00:18:16.536 "unmap": true, 00:18:16.536 "flush": true, 00:18:16.536 "reset": true, 00:18:16.536 "nvme_admin": false, 00:18:16.536 "nvme_io": false, 00:18:16.536 "nvme_io_md": false, 00:18:16.536 "write_zeroes": true, 00:18:16.536 "zcopy": true, 00:18:16.536 "get_zone_info": false, 00:18:16.536 "zone_management": false, 00:18:16.536 "zone_append": false, 00:18:16.536 "compare": false, 00:18:16.536 "compare_and_write": false, 00:18:16.536 "abort": true, 00:18:16.536 "seek_hole": false, 00:18:16.536 "seek_data": false, 00:18:16.536 "copy": true, 00:18:16.536 "nvme_iov_md": false 00:18:16.536 }, 00:18:16.536 "memory_domains": [ 00:18:16.536 { 00:18:16.536 "dma_device_id": "system", 00:18:16.536 "dma_device_type": 1 00:18:16.536 }, 00:18:16.537 { 00:18:16.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.537 "dma_device_type": 2 00:18:16.537 } 00:18:16.537 ], 00:18:16.537 "driver_specific": {} 00:18:16.537 } 00:18:16.537 ] 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.537 "name": "Existed_Raid", 00:18:16.537 "uuid": 
"3cdbe458-8c66-4ef0-881c-c4cd8174e118", 00:18:16.537 "strip_size_kb": 64, 00:18:16.537 "state": "configuring", 00:18:16.537 "raid_level": "raid5f", 00:18:16.537 "superblock": true, 00:18:16.537 "num_base_bdevs": 4, 00:18:16.537 "num_base_bdevs_discovered": 2, 00:18:16.537 "num_base_bdevs_operational": 4, 00:18:16.537 "base_bdevs_list": [ 00:18:16.537 { 00:18:16.537 "name": "BaseBdev1", 00:18:16.537 "uuid": "f9118186-8197-4d3d-a6d3-fb1a937c45b4", 00:18:16.537 "is_configured": true, 00:18:16.537 "data_offset": 2048, 00:18:16.537 "data_size": 63488 00:18:16.537 }, 00:18:16.537 { 00:18:16.537 "name": "BaseBdev2", 00:18:16.537 "uuid": "a86ff69a-d248-44a3-8f8f-63c22f179e67", 00:18:16.537 "is_configured": true, 00:18:16.537 "data_offset": 2048, 00:18:16.537 "data_size": 63488 00:18:16.537 }, 00:18:16.537 { 00:18:16.537 "name": "BaseBdev3", 00:18:16.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.537 "is_configured": false, 00:18:16.537 "data_offset": 0, 00:18:16.537 "data_size": 0 00:18:16.537 }, 00:18:16.537 { 00:18:16.537 "name": "BaseBdev4", 00:18:16.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.537 "is_configured": false, 00:18:16.537 "data_offset": 0, 00:18:16.537 "data_size": 0 00:18:16.537 } 00:18:16.537 ] 00:18:16.537 }' 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.537 04:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.105 [2024-11-27 04:35:13.434736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.105 BaseBdev3 
00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:17.105 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.106 [ 00:18:17.106 { 00:18:17.106 "name": "BaseBdev3", 00:18:17.106 "aliases": [ 00:18:17.106 "a743111a-cb42-4542-bb43-f6e2687e81f3" 00:18:17.106 ], 00:18:17.106 "product_name": "Malloc disk", 00:18:17.106 "block_size": 512, 00:18:17.106 "num_blocks": 65536, 00:18:17.106 "uuid": "a743111a-cb42-4542-bb43-f6e2687e81f3", 00:18:17.106 
"assigned_rate_limits": { 00:18:17.106 "rw_ios_per_sec": 0, 00:18:17.106 "rw_mbytes_per_sec": 0, 00:18:17.106 "r_mbytes_per_sec": 0, 00:18:17.106 "w_mbytes_per_sec": 0 00:18:17.106 }, 00:18:17.106 "claimed": true, 00:18:17.106 "claim_type": "exclusive_write", 00:18:17.106 "zoned": false, 00:18:17.106 "supported_io_types": { 00:18:17.106 "read": true, 00:18:17.106 "write": true, 00:18:17.106 "unmap": true, 00:18:17.106 "flush": true, 00:18:17.106 "reset": true, 00:18:17.106 "nvme_admin": false, 00:18:17.106 "nvme_io": false, 00:18:17.106 "nvme_io_md": false, 00:18:17.106 "write_zeroes": true, 00:18:17.106 "zcopy": true, 00:18:17.106 "get_zone_info": false, 00:18:17.106 "zone_management": false, 00:18:17.106 "zone_append": false, 00:18:17.106 "compare": false, 00:18:17.106 "compare_and_write": false, 00:18:17.106 "abort": true, 00:18:17.106 "seek_hole": false, 00:18:17.106 "seek_data": false, 00:18:17.106 "copy": true, 00:18:17.106 "nvme_iov_md": false 00:18:17.106 }, 00:18:17.106 "memory_domains": [ 00:18:17.106 { 00:18:17.106 "dma_device_id": "system", 00:18:17.106 "dma_device_type": 1 00:18:17.106 }, 00:18:17.106 { 00:18:17.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.106 "dma_device_type": 2 00:18:17.106 } 00:18:17.106 ], 00:18:17.106 "driver_specific": {} 00:18:17.106 } 00:18:17.106 ] 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.106 "name": "Existed_Raid", 00:18:17.106 "uuid": "3cdbe458-8c66-4ef0-881c-c4cd8174e118", 00:18:17.106 "strip_size_kb": 64, 00:18:17.106 "state": "configuring", 00:18:17.106 "raid_level": "raid5f", 00:18:17.106 "superblock": true, 00:18:17.106 "num_base_bdevs": 4, 00:18:17.106 "num_base_bdevs_discovered": 3, 
00:18:17.106 "num_base_bdevs_operational": 4, 00:18:17.106 "base_bdevs_list": [ 00:18:17.106 { 00:18:17.106 "name": "BaseBdev1", 00:18:17.106 "uuid": "f9118186-8197-4d3d-a6d3-fb1a937c45b4", 00:18:17.106 "is_configured": true, 00:18:17.106 "data_offset": 2048, 00:18:17.106 "data_size": 63488 00:18:17.106 }, 00:18:17.106 { 00:18:17.106 "name": "BaseBdev2", 00:18:17.106 "uuid": "a86ff69a-d248-44a3-8f8f-63c22f179e67", 00:18:17.106 "is_configured": true, 00:18:17.106 "data_offset": 2048, 00:18:17.106 "data_size": 63488 00:18:17.106 }, 00:18:17.106 { 00:18:17.106 "name": "BaseBdev3", 00:18:17.106 "uuid": "a743111a-cb42-4542-bb43-f6e2687e81f3", 00:18:17.106 "is_configured": true, 00:18:17.106 "data_offset": 2048, 00:18:17.106 "data_size": 63488 00:18:17.106 }, 00:18:17.106 { 00:18:17.106 "name": "BaseBdev4", 00:18:17.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.106 "is_configured": false, 00:18:17.106 "data_offset": 0, 00:18:17.106 "data_size": 0 00:18:17.106 } 00:18:17.106 ] 00:18:17.106 }' 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.106 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.675 [2024-11-27 04:35:13.991090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:17.675 [2024-11-27 04:35:13.991468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:17.675 [2024-11-27 04:35:13.991492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:17.675 BaseBdev4 
00:18:17.675 [2024-11-27 04:35:13.991820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.675 04:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.675 [2024-11-27 04:35:13.999860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:17.675 [2024-11-27 04:35:13.999932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:17.675 [2024-11-27 04:35:14.000264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:17.675 04:35:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.675 [ 00:18:17.675 { 00:18:17.675 "name": "BaseBdev4", 00:18:17.675 "aliases": [ 00:18:17.675 "549ad1ed-7c4a-4825-b8b0-fcf12d9e4589" 00:18:17.675 ], 00:18:17.675 "product_name": "Malloc disk", 00:18:17.675 "block_size": 512, 00:18:17.675 "num_blocks": 65536, 00:18:17.675 "uuid": "549ad1ed-7c4a-4825-b8b0-fcf12d9e4589", 00:18:17.675 "assigned_rate_limits": { 00:18:17.675 "rw_ios_per_sec": 0, 00:18:17.675 "rw_mbytes_per_sec": 0, 00:18:17.675 "r_mbytes_per_sec": 0, 00:18:17.675 "w_mbytes_per_sec": 0 00:18:17.675 }, 00:18:17.675 "claimed": true, 00:18:17.675 "claim_type": "exclusive_write", 00:18:17.675 "zoned": false, 00:18:17.675 "supported_io_types": { 00:18:17.675 "read": true, 00:18:17.675 "write": true, 00:18:17.675 "unmap": true, 00:18:17.675 "flush": true, 00:18:17.675 "reset": true, 00:18:17.675 "nvme_admin": false, 00:18:17.675 "nvme_io": false, 00:18:17.675 "nvme_io_md": false, 00:18:17.675 "write_zeroes": true, 00:18:17.675 "zcopy": true, 00:18:17.675 "get_zone_info": false, 00:18:17.675 "zone_management": false, 00:18:17.675 "zone_append": false, 00:18:17.675 "compare": false, 00:18:17.675 "compare_and_write": false, 00:18:17.675 "abort": true, 00:18:17.675 "seek_hole": false, 00:18:17.675 "seek_data": false, 00:18:17.675 "copy": true, 00:18:17.675 "nvme_iov_md": false 00:18:17.675 }, 00:18:17.675 "memory_domains": [ 00:18:17.675 { 00:18:17.675 "dma_device_id": "system", 00:18:17.675 "dma_device_type": 1 00:18:17.675 }, 00:18:17.675 { 00:18:17.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.675 "dma_device_type": 2 00:18:17.675 } 00:18:17.675 ], 00:18:17.675 "driver_specific": {} 00:18:17.675 } 00:18:17.675 ] 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.675 04:35:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:17.675 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.676 "name": "Existed_Raid", 00:18:17.676 "uuid": "3cdbe458-8c66-4ef0-881c-c4cd8174e118", 00:18:17.676 "strip_size_kb": 64, 00:18:17.676 "state": "online", 00:18:17.676 "raid_level": "raid5f", 00:18:17.676 "superblock": true, 00:18:17.676 "num_base_bdevs": 4, 00:18:17.676 "num_base_bdevs_discovered": 4, 00:18:17.676 "num_base_bdevs_operational": 4, 00:18:17.676 "base_bdevs_list": [ 00:18:17.676 { 00:18:17.676 "name": "BaseBdev1", 00:18:17.676 "uuid": "f9118186-8197-4d3d-a6d3-fb1a937c45b4", 00:18:17.676 "is_configured": true, 00:18:17.676 "data_offset": 2048, 00:18:17.676 "data_size": 63488 00:18:17.676 }, 00:18:17.676 { 00:18:17.676 "name": "BaseBdev2", 00:18:17.676 "uuid": "a86ff69a-d248-44a3-8f8f-63c22f179e67", 00:18:17.676 "is_configured": true, 00:18:17.676 "data_offset": 2048, 00:18:17.676 "data_size": 63488 00:18:17.676 }, 00:18:17.676 { 00:18:17.676 "name": "BaseBdev3", 00:18:17.676 "uuid": "a743111a-cb42-4542-bb43-f6e2687e81f3", 00:18:17.676 "is_configured": true, 00:18:17.676 "data_offset": 2048, 00:18:17.676 "data_size": 63488 00:18:17.676 }, 00:18:17.676 { 00:18:17.676 "name": "BaseBdev4", 00:18:17.676 "uuid": "549ad1ed-7c4a-4825-b8b0-fcf12d9e4589", 00:18:17.676 "is_configured": true, 00:18:17.676 "data_offset": 2048, 00:18:17.676 "data_size": 63488 00:18:17.676 } 00:18:17.676 ] 00:18:17.676 }' 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.676 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.935 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.935 [2024-11-27 04:35:14.508949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:18.194 "name": "Existed_Raid", 00:18:18.194 "aliases": [ 00:18:18.194 "3cdbe458-8c66-4ef0-881c-c4cd8174e118" 00:18:18.194 ], 00:18:18.194 "product_name": "Raid Volume", 00:18:18.194 "block_size": 512, 00:18:18.194 "num_blocks": 190464, 00:18:18.194 "uuid": "3cdbe458-8c66-4ef0-881c-c4cd8174e118", 00:18:18.194 "assigned_rate_limits": { 00:18:18.194 "rw_ios_per_sec": 0, 00:18:18.194 "rw_mbytes_per_sec": 0, 00:18:18.194 "r_mbytes_per_sec": 0, 00:18:18.194 "w_mbytes_per_sec": 0 00:18:18.194 }, 00:18:18.194 "claimed": false, 00:18:18.194 "zoned": false, 00:18:18.194 "supported_io_types": { 00:18:18.194 "read": true, 00:18:18.194 "write": true, 00:18:18.194 "unmap": false, 00:18:18.194 "flush": false, 
00:18:18.194 "reset": true, 00:18:18.194 "nvme_admin": false, 00:18:18.194 "nvme_io": false, 00:18:18.194 "nvme_io_md": false, 00:18:18.194 "write_zeroes": true, 00:18:18.194 "zcopy": false, 00:18:18.194 "get_zone_info": false, 00:18:18.194 "zone_management": false, 00:18:18.194 "zone_append": false, 00:18:18.194 "compare": false, 00:18:18.194 "compare_and_write": false, 00:18:18.194 "abort": false, 00:18:18.194 "seek_hole": false, 00:18:18.194 "seek_data": false, 00:18:18.194 "copy": false, 00:18:18.194 "nvme_iov_md": false 00:18:18.194 }, 00:18:18.194 "driver_specific": { 00:18:18.194 "raid": { 00:18:18.194 "uuid": "3cdbe458-8c66-4ef0-881c-c4cd8174e118", 00:18:18.194 "strip_size_kb": 64, 00:18:18.194 "state": "online", 00:18:18.194 "raid_level": "raid5f", 00:18:18.194 "superblock": true, 00:18:18.194 "num_base_bdevs": 4, 00:18:18.194 "num_base_bdevs_discovered": 4, 00:18:18.194 "num_base_bdevs_operational": 4, 00:18:18.194 "base_bdevs_list": [ 00:18:18.194 { 00:18:18.194 "name": "BaseBdev1", 00:18:18.194 "uuid": "f9118186-8197-4d3d-a6d3-fb1a937c45b4", 00:18:18.194 "is_configured": true, 00:18:18.194 "data_offset": 2048, 00:18:18.194 "data_size": 63488 00:18:18.194 }, 00:18:18.194 { 00:18:18.194 "name": "BaseBdev2", 00:18:18.194 "uuid": "a86ff69a-d248-44a3-8f8f-63c22f179e67", 00:18:18.194 "is_configured": true, 00:18:18.194 "data_offset": 2048, 00:18:18.194 "data_size": 63488 00:18:18.194 }, 00:18:18.194 { 00:18:18.194 "name": "BaseBdev3", 00:18:18.194 "uuid": "a743111a-cb42-4542-bb43-f6e2687e81f3", 00:18:18.194 "is_configured": true, 00:18:18.194 "data_offset": 2048, 00:18:18.194 "data_size": 63488 00:18:18.194 }, 00:18:18.194 { 00:18:18.194 "name": "BaseBdev4", 00:18:18.194 "uuid": "549ad1ed-7c4a-4825-b8b0-fcf12d9e4589", 00:18:18.194 "is_configured": true, 00:18:18.194 "data_offset": 2048, 00:18:18.194 "data_size": 63488 00:18:18.194 } 00:18:18.194 ] 00:18:18.194 } 00:18:18.194 } 00:18:18.194 }' 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:18.194 BaseBdev2 00:18:18.194 BaseBdev3 00:18:18.194 BaseBdev4' 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.194 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.195 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.533 04:35:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.533 [2024-11-27 04:35:14.860222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.533 04:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.533 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.533 "name": "Existed_Raid", 00:18:18.533 "uuid": "3cdbe458-8c66-4ef0-881c-c4cd8174e118", 00:18:18.533 "strip_size_kb": 64, 00:18:18.533 "state": "online", 00:18:18.533 "raid_level": "raid5f", 00:18:18.533 "superblock": true, 00:18:18.533 "num_base_bdevs": 4, 00:18:18.533 "num_base_bdevs_discovered": 3, 00:18:18.533 "num_base_bdevs_operational": 3, 00:18:18.533 "base_bdevs_list": [ 00:18:18.533 { 00:18:18.533 "name": 
null, 00:18:18.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.533 "is_configured": false, 00:18:18.533 "data_offset": 0, 00:18:18.533 "data_size": 63488 00:18:18.533 }, 00:18:18.533 { 00:18:18.533 "name": "BaseBdev2", 00:18:18.533 "uuid": "a86ff69a-d248-44a3-8f8f-63c22f179e67", 00:18:18.533 "is_configured": true, 00:18:18.533 "data_offset": 2048, 00:18:18.533 "data_size": 63488 00:18:18.533 }, 00:18:18.533 { 00:18:18.533 "name": "BaseBdev3", 00:18:18.533 "uuid": "a743111a-cb42-4542-bb43-f6e2687e81f3", 00:18:18.533 "is_configured": true, 00:18:18.533 "data_offset": 2048, 00:18:18.533 "data_size": 63488 00:18:18.533 }, 00:18:18.533 { 00:18:18.533 "name": "BaseBdev4", 00:18:18.533 "uuid": "549ad1ed-7c4a-4825-b8b0-fcf12d9e4589", 00:18:18.533 "is_configured": true, 00:18:18.533 "data_offset": 2048, 00:18:18.533 "data_size": 63488 00:18:18.533 } 00:18:18.533 ] 00:18:18.533 }' 00:18:18.533 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.533 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.100 [2024-11-27 04:35:15.461978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.100 [2024-11-27 04:35:15.462155] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.100 [2024-11-27 04:35:15.559791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.100 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.100 [2024-11-27 04:35:15.623729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.358 [2024-11-27 
04:35:15.787242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:19.358 [2024-11-27 04:35:15.787349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:19.358 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.617 04:35:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.617 BaseBdev2 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.617 04:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.617 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.617 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:19.617 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.617 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.617 [ 00:18:19.617 { 00:18:19.617 "name": "BaseBdev2", 00:18:19.617 "aliases": [ 00:18:19.617 "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea" 00:18:19.617 ], 00:18:19.617 "product_name": "Malloc disk", 00:18:19.617 "block_size": 512, 00:18:19.617 
"num_blocks": 65536, 00:18:19.617 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:19.617 "assigned_rate_limits": { 00:18:19.617 "rw_ios_per_sec": 0, 00:18:19.617 "rw_mbytes_per_sec": 0, 00:18:19.617 "r_mbytes_per_sec": 0, 00:18:19.617 "w_mbytes_per_sec": 0 00:18:19.617 }, 00:18:19.618 "claimed": false, 00:18:19.618 "zoned": false, 00:18:19.618 "supported_io_types": { 00:18:19.618 "read": true, 00:18:19.618 "write": true, 00:18:19.618 "unmap": true, 00:18:19.618 "flush": true, 00:18:19.618 "reset": true, 00:18:19.618 "nvme_admin": false, 00:18:19.618 "nvme_io": false, 00:18:19.618 "nvme_io_md": false, 00:18:19.618 "write_zeroes": true, 00:18:19.618 "zcopy": true, 00:18:19.618 "get_zone_info": false, 00:18:19.618 "zone_management": false, 00:18:19.618 "zone_append": false, 00:18:19.618 "compare": false, 00:18:19.618 "compare_and_write": false, 00:18:19.618 "abort": true, 00:18:19.618 "seek_hole": false, 00:18:19.618 "seek_data": false, 00:18:19.618 "copy": true, 00:18:19.618 "nvme_iov_md": false 00:18:19.618 }, 00:18:19.618 "memory_domains": [ 00:18:19.618 { 00:18:19.618 "dma_device_id": "system", 00:18:19.618 "dma_device_type": 1 00:18:19.618 }, 00:18:19.618 { 00:18:19.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.618 "dma_device_type": 2 00:18:19.618 } 00:18:19.618 ], 00:18:19.618 "driver_specific": {} 00:18:19.618 } 00:18:19.618 ] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:19.618 04:35:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.618 BaseBdev3 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.618 [ 00:18:19.618 { 00:18:19.618 "name": "BaseBdev3", 00:18:19.618 "aliases": [ 00:18:19.618 
"c6fbd681-121d-4df9-aae6-82774b4097fc" 00:18:19.618 ], 00:18:19.618 "product_name": "Malloc disk", 00:18:19.618 "block_size": 512, 00:18:19.618 "num_blocks": 65536, 00:18:19.618 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 00:18:19.618 "assigned_rate_limits": { 00:18:19.618 "rw_ios_per_sec": 0, 00:18:19.618 "rw_mbytes_per_sec": 0, 00:18:19.618 "r_mbytes_per_sec": 0, 00:18:19.618 "w_mbytes_per_sec": 0 00:18:19.618 }, 00:18:19.618 "claimed": false, 00:18:19.618 "zoned": false, 00:18:19.618 "supported_io_types": { 00:18:19.618 "read": true, 00:18:19.618 "write": true, 00:18:19.618 "unmap": true, 00:18:19.618 "flush": true, 00:18:19.618 "reset": true, 00:18:19.618 "nvme_admin": false, 00:18:19.618 "nvme_io": false, 00:18:19.618 "nvme_io_md": false, 00:18:19.618 "write_zeroes": true, 00:18:19.618 "zcopy": true, 00:18:19.618 "get_zone_info": false, 00:18:19.618 "zone_management": false, 00:18:19.618 "zone_append": false, 00:18:19.618 "compare": false, 00:18:19.618 "compare_and_write": false, 00:18:19.618 "abort": true, 00:18:19.618 "seek_hole": false, 00:18:19.618 "seek_data": false, 00:18:19.618 "copy": true, 00:18:19.618 "nvme_iov_md": false 00:18:19.618 }, 00:18:19.618 "memory_domains": [ 00:18:19.618 { 00:18:19.618 "dma_device_id": "system", 00:18:19.618 "dma_device_type": 1 00:18:19.618 }, 00:18:19.618 { 00:18:19.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.618 "dma_device_type": 2 00:18:19.618 } 00:18:19.618 ], 00:18:19.618 "driver_specific": {} 00:18:19.618 } 00:18:19.618 ] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:19.618 04:35:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.618 BaseBdev4 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:19.618 [ 00:18:19.618 { 00:18:19.618 "name": "BaseBdev4", 00:18:19.618 "aliases": [ 00:18:19.618 "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e" 00:18:19.618 ], 00:18:19.618 "product_name": "Malloc disk", 00:18:19.618 "block_size": 512, 00:18:19.618 "num_blocks": 65536, 00:18:19.618 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:19.618 "assigned_rate_limits": { 00:18:19.618 "rw_ios_per_sec": 0, 00:18:19.618 "rw_mbytes_per_sec": 0, 00:18:19.618 "r_mbytes_per_sec": 0, 00:18:19.618 "w_mbytes_per_sec": 0 00:18:19.618 }, 00:18:19.618 "claimed": false, 00:18:19.618 "zoned": false, 00:18:19.618 "supported_io_types": { 00:18:19.618 "read": true, 00:18:19.618 "write": true, 00:18:19.618 "unmap": true, 00:18:19.618 "flush": true, 00:18:19.618 "reset": true, 00:18:19.618 "nvme_admin": false, 00:18:19.618 "nvme_io": false, 00:18:19.618 "nvme_io_md": false, 00:18:19.618 "write_zeroes": true, 00:18:19.618 "zcopy": true, 00:18:19.618 "get_zone_info": false, 00:18:19.618 "zone_management": false, 00:18:19.618 "zone_append": false, 00:18:19.618 "compare": false, 00:18:19.618 "compare_and_write": false, 00:18:19.618 "abort": true, 00:18:19.618 "seek_hole": false, 00:18:19.618 "seek_data": false, 00:18:19.618 "copy": true, 00:18:19.618 "nvme_iov_md": false 00:18:19.618 }, 00:18:19.618 "memory_domains": [ 00:18:19.618 { 00:18:19.618 "dma_device_id": "system", 00:18:19.618 "dma_device_type": 1 00:18:19.618 }, 00:18:19.618 { 00:18:19.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.618 "dma_device_type": 2 00:18:19.618 } 00:18:19.618 ], 00:18:19.618 "driver_specific": {} 00:18:19.618 } 00:18:19.618 ] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:19.618 04:35:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.618 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.618 [2024-11-27 04:35:16.192525] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:19.618 [2024-11-27 04:35:16.192626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:19.619 [2024-11-27 04:35:16.192679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:19.619 [2024-11-27 04:35:16.194688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:19.619 [2024-11-27 04:35:16.194809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.619 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.877 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.877 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.877 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.877 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.877 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.877 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.877 "name": "Existed_Raid", 00:18:19.877 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:19.877 "strip_size_kb": 64, 00:18:19.877 "state": "configuring", 00:18:19.877 "raid_level": "raid5f", 00:18:19.877 "superblock": true, 00:18:19.877 "num_base_bdevs": 4, 00:18:19.877 "num_base_bdevs_discovered": 3, 00:18:19.877 "num_base_bdevs_operational": 4, 00:18:19.877 "base_bdevs_list": [ 00:18:19.877 { 00:18:19.877 "name": "BaseBdev1", 00:18:19.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.877 "is_configured": false, 00:18:19.877 "data_offset": 0, 00:18:19.877 "data_size": 0 00:18:19.877 }, 00:18:19.877 { 00:18:19.877 "name": "BaseBdev2", 00:18:19.877 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:19.877 "is_configured": true, 00:18:19.877 "data_offset": 2048, 00:18:19.877 
"data_size": 63488 00:18:19.877 }, 00:18:19.877 { 00:18:19.877 "name": "BaseBdev3", 00:18:19.877 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 00:18:19.877 "is_configured": true, 00:18:19.877 "data_offset": 2048, 00:18:19.877 "data_size": 63488 00:18:19.877 }, 00:18:19.877 { 00:18:19.877 "name": "BaseBdev4", 00:18:19.877 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:19.877 "is_configured": true, 00:18:19.877 "data_offset": 2048, 00:18:19.877 "data_size": 63488 00:18:19.877 } 00:18:19.877 ] 00:18:19.877 }' 00:18:19.877 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.877 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.137 [2024-11-27 04:35:16.635790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.137 04:35:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.137 "name": "Existed_Raid", 00:18:20.137 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:20.137 "strip_size_kb": 64, 00:18:20.137 "state": "configuring", 00:18:20.137 "raid_level": "raid5f", 00:18:20.137 "superblock": true, 00:18:20.137 "num_base_bdevs": 4, 00:18:20.137 "num_base_bdevs_discovered": 2, 00:18:20.137 "num_base_bdevs_operational": 4, 00:18:20.137 "base_bdevs_list": [ 00:18:20.137 { 00:18:20.137 "name": "BaseBdev1", 00:18:20.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.137 "is_configured": false, 00:18:20.137 "data_offset": 0, 00:18:20.137 "data_size": 0 00:18:20.137 }, 00:18:20.137 { 00:18:20.137 "name": null, 00:18:20.137 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:20.137 
"is_configured": false, 00:18:20.137 "data_offset": 0, 00:18:20.137 "data_size": 63488 00:18:20.137 }, 00:18:20.137 { 00:18:20.137 "name": "BaseBdev3", 00:18:20.137 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 00:18:20.137 "is_configured": true, 00:18:20.137 "data_offset": 2048, 00:18:20.137 "data_size": 63488 00:18:20.137 }, 00:18:20.137 { 00:18:20.137 "name": "BaseBdev4", 00:18:20.137 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:20.137 "is_configured": true, 00:18:20.137 "data_offset": 2048, 00:18:20.137 "data_size": 63488 00:18:20.137 } 00:18:20.137 ] 00:18:20.137 }' 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.137 04:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.705 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.705 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.706 [2024-11-27 04:35:17.187021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:18:20.706 BaseBdev1 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.706 [ 00:18:20.706 { 00:18:20.706 "name": "BaseBdev1", 00:18:20.706 "aliases": [ 00:18:20.706 "132f8941-686f-4bbe-9d5a-3c9d085ead2e" 00:18:20.706 ], 00:18:20.706 "product_name": "Malloc disk", 00:18:20.706 "block_size": 512, 00:18:20.706 "num_blocks": 65536, 00:18:20.706 "uuid": "132f8941-686f-4bbe-9d5a-3c9d085ead2e", 
00:18:20.706 "assigned_rate_limits": { 00:18:20.706 "rw_ios_per_sec": 0, 00:18:20.706 "rw_mbytes_per_sec": 0, 00:18:20.706 "r_mbytes_per_sec": 0, 00:18:20.706 "w_mbytes_per_sec": 0 00:18:20.706 }, 00:18:20.706 "claimed": true, 00:18:20.706 "claim_type": "exclusive_write", 00:18:20.706 "zoned": false, 00:18:20.706 "supported_io_types": { 00:18:20.706 "read": true, 00:18:20.706 "write": true, 00:18:20.706 "unmap": true, 00:18:20.706 "flush": true, 00:18:20.706 "reset": true, 00:18:20.706 "nvme_admin": false, 00:18:20.706 "nvme_io": false, 00:18:20.706 "nvme_io_md": false, 00:18:20.706 "write_zeroes": true, 00:18:20.706 "zcopy": true, 00:18:20.706 "get_zone_info": false, 00:18:20.706 "zone_management": false, 00:18:20.706 "zone_append": false, 00:18:20.706 "compare": false, 00:18:20.706 "compare_and_write": false, 00:18:20.706 "abort": true, 00:18:20.706 "seek_hole": false, 00:18:20.706 "seek_data": false, 00:18:20.706 "copy": true, 00:18:20.706 "nvme_iov_md": false 00:18:20.706 }, 00:18:20.706 "memory_domains": [ 00:18:20.706 { 00:18:20.706 "dma_device_id": "system", 00:18:20.706 "dma_device_type": 1 00:18:20.706 }, 00:18:20.706 { 00:18:20.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.706 "dma_device_type": 2 00:18:20.706 } 00:18:20.706 ], 00:18:20.706 "driver_specific": {} 00:18:20.706 } 00:18:20.706 ] 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.706 04:35:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.706 "name": "Existed_Raid", 00:18:20.706 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:20.706 "strip_size_kb": 64, 00:18:20.706 "state": "configuring", 00:18:20.706 "raid_level": "raid5f", 00:18:20.706 "superblock": true, 00:18:20.706 "num_base_bdevs": 4, 00:18:20.706 "num_base_bdevs_discovered": 3, 00:18:20.706 "num_base_bdevs_operational": 4, 00:18:20.706 "base_bdevs_list": [ 00:18:20.706 { 00:18:20.706 "name": "BaseBdev1", 00:18:20.706 "uuid": "132f8941-686f-4bbe-9d5a-3c9d085ead2e", 
00:18:20.706 "is_configured": true, 00:18:20.706 "data_offset": 2048, 00:18:20.706 "data_size": 63488 00:18:20.706 }, 00:18:20.706 { 00:18:20.706 "name": null, 00:18:20.706 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:20.706 "is_configured": false, 00:18:20.706 "data_offset": 0, 00:18:20.706 "data_size": 63488 00:18:20.706 }, 00:18:20.706 { 00:18:20.706 "name": "BaseBdev3", 00:18:20.706 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 00:18:20.706 "is_configured": true, 00:18:20.706 "data_offset": 2048, 00:18:20.706 "data_size": 63488 00:18:20.706 }, 00:18:20.706 { 00:18:20.706 "name": "BaseBdev4", 00:18:20.706 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:20.706 "is_configured": true, 00:18:20.706 "data_offset": 2048, 00:18:20.706 "data_size": 63488 00:18:20.706 } 00:18:20.706 ] 00:18:20.706 }' 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.706 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.278 [2024-11-27 04:35:17.702285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.278 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.278 "name": "Existed_Raid", 00:18:21.279 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:21.279 "strip_size_kb": 64, 00:18:21.279 "state": "configuring", 00:18:21.279 "raid_level": "raid5f", 00:18:21.279 "superblock": true, 00:18:21.279 "num_base_bdevs": 4, 00:18:21.279 "num_base_bdevs_discovered": 2, 00:18:21.279 "num_base_bdevs_operational": 4, 00:18:21.279 "base_bdevs_list": [ 00:18:21.279 { 00:18:21.279 "name": "BaseBdev1", 00:18:21.279 "uuid": "132f8941-686f-4bbe-9d5a-3c9d085ead2e", 00:18:21.279 "is_configured": true, 00:18:21.279 "data_offset": 2048, 00:18:21.279 "data_size": 63488 00:18:21.279 }, 00:18:21.279 { 00:18:21.279 "name": null, 00:18:21.279 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:21.279 "is_configured": false, 00:18:21.279 "data_offset": 0, 00:18:21.279 "data_size": 63488 00:18:21.279 }, 00:18:21.279 { 00:18:21.279 "name": null, 00:18:21.279 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 00:18:21.279 "is_configured": false, 00:18:21.279 "data_offset": 0, 00:18:21.279 "data_size": 63488 00:18:21.279 }, 00:18:21.279 { 00:18:21.279 "name": "BaseBdev4", 00:18:21.279 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:21.279 "is_configured": true, 00:18:21.279 "data_offset": 2048, 00:18:21.279 "data_size": 63488 00:18:21.279 } 00:18:21.279 ] 00:18:21.279 }' 00:18:21.279 04:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.279 04:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.848 [2024-11-27 04:35:18.221394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.848 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.849 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.849 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.849 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.849 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.849 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.849 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.849 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.849 "name": "Existed_Raid", 00:18:21.849 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:21.849 "strip_size_kb": 64, 00:18:21.849 "state": "configuring", 00:18:21.849 "raid_level": "raid5f", 00:18:21.849 "superblock": true, 00:18:21.849 "num_base_bdevs": 4, 00:18:21.849 "num_base_bdevs_discovered": 3, 00:18:21.849 "num_base_bdevs_operational": 4, 00:18:21.849 "base_bdevs_list": [ 00:18:21.849 { 00:18:21.849 "name": "BaseBdev1", 00:18:21.849 "uuid": "132f8941-686f-4bbe-9d5a-3c9d085ead2e", 00:18:21.849 "is_configured": true, 00:18:21.849 "data_offset": 2048, 00:18:21.849 "data_size": 63488 00:18:21.849 }, 00:18:21.849 { 00:18:21.849 "name": null, 00:18:21.849 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:21.849 "is_configured": false, 00:18:21.849 "data_offset": 0, 00:18:21.849 "data_size": 63488 00:18:21.849 }, 00:18:21.849 { 00:18:21.849 "name": "BaseBdev3", 00:18:21.849 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 
00:18:21.849 "is_configured": true, 00:18:21.849 "data_offset": 2048, 00:18:21.849 "data_size": 63488 00:18:21.849 }, 00:18:21.849 { 00:18:21.849 "name": "BaseBdev4", 00:18:21.849 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:21.849 "is_configured": true, 00:18:21.849 "data_offset": 2048, 00:18:21.849 "data_size": 63488 00:18:21.849 } 00:18:21.849 ] 00:18:21.849 }' 00:18:21.849 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.849 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.107 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.107 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:22.107 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.107 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.369 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.369 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:22.369 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.370 [2024-11-27 04:35:18.724561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.370 "name": "Existed_Raid", 00:18:22.370 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:22.370 "strip_size_kb": 64, 00:18:22.370 "state": "configuring", 00:18:22.370 "raid_level": "raid5f", 
00:18:22.370 "superblock": true, 00:18:22.370 "num_base_bdevs": 4, 00:18:22.370 "num_base_bdevs_discovered": 2, 00:18:22.370 "num_base_bdevs_operational": 4, 00:18:22.370 "base_bdevs_list": [ 00:18:22.370 { 00:18:22.370 "name": null, 00:18:22.370 "uuid": "132f8941-686f-4bbe-9d5a-3c9d085ead2e", 00:18:22.370 "is_configured": false, 00:18:22.370 "data_offset": 0, 00:18:22.370 "data_size": 63488 00:18:22.370 }, 00:18:22.370 { 00:18:22.370 "name": null, 00:18:22.370 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:22.370 "is_configured": false, 00:18:22.370 "data_offset": 0, 00:18:22.370 "data_size": 63488 00:18:22.370 }, 00:18:22.370 { 00:18:22.370 "name": "BaseBdev3", 00:18:22.370 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 00:18:22.370 "is_configured": true, 00:18:22.370 "data_offset": 2048, 00:18:22.370 "data_size": 63488 00:18:22.370 }, 00:18:22.370 { 00:18:22.370 "name": "BaseBdev4", 00:18:22.370 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:22.370 "is_configured": true, 00:18:22.370 "data_offset": 2048, 00:18:22.370 "data_size": 63488 00:18:22.370 } 00:18:22.370 ] 00:18:22.370 }' 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.370 04:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.940 [2024-11-27 04:35:19.348073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.940 "name": "Existed_Raid", 00:18:22.940 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:22.940 "strip_size_kb": 64, 00:18:22.940 "state": "configuring", 00:18:22.940 "raid_level": "raid5f", 00:18:22.940 "superblock": true, 00:18:22.940 "num_base_bdevs": 4, 00:18:22.940 "num_base_bdevs_discovered": 3, 00:18:22.940 "num_base_bdevs_operational": 4, 00:18:22.940 "base_bdevs_list": [ 00:18:22.940 { 00:18:22.940 "name": null, 00:18:22.940 "uuid": "132f8941-686f-4bbe-9d5a-3c9d085ead2e", 00:18:22.940 "is_configured": false, 00:18:22.940 "data_offset": 0, 00:18:22.940 "data_size": 63488 00:18:22.940 }, 00:18:22.940 { 00:18:22.940 "name": "BaseBdev2", 00:18:22.940 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:22.940 "is_configured": true, 00:18:22.940 "data_offset": 2048, 00:18:22.940 "data_size": 63488 00:18:22.940 }, 00:18:22.940 { 00:18:22.940 "name": "BaseBdev3", 00:18:22.940 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 00:18:22.940 "is_configured": true, 00:18:22.940 "data_offset": 2048, 00:18:22.940 "data_size": 63488 00:18:22.940 }, 00:18:22.940 { 00:18:22.940 "name": "BaseBdev4", 00:18:22.940 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:22.940 "is_configured": true, 00:18:22.940 "data_offset": 2048, 00:18:22.940 "data_size": 63488 00:18:22.940 } 00:18:22.940 ] 00:18:22.940 }' 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:18:22.940 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 132f8941-686f-4bbe-9d5a-3c9d085ead2e 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.510 04:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.510 [2024-11-27 04:35:20.031724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:23.510 [2024-11-27 04:35:20.032031] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:23.510 [2024-11-27 04:35:20.032050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:23.510 [2024-11-27 04:35:20.032362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:23.510 NewBaseBdev 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.510 [2024-11-27 04:35:20.040865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:23.510 [2024-11-27 04:35:20.040911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:23.510 [2024-11-27 04:35:20.041198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.510 [ 00:18:23.510 { 00:18:23.510 "name": "NewBaseBdev", 00:18:23.510 "aliases": [ 00:18:23.510 "132f8941-686f-4bbe-9d5a-3c9d085ead2e" 00:18:23.510 ], 00:18:23.510 "product_name": "Malloc disk", 00:18:23.510 "block_size": 512, 00:18:23.510 "num_blocks": 65536, 00:18:23.510 "uuid": "132f8941-686f-4bbe-9d5a-3c9d085ead2e", 00:18:23.510 "assigned_rate_limits": { 00:18:23.510 "rw_ios_per_sec": 0, 00:18:23.510 "rw_mbytes_per_sec": 0, 00:18:23.510 "r_mbytes_per_sec": 0, 00:18:23.510 "w_mbytes_per_sec": 0 00:18:23.510 }, 00:18:23.510 "claimed": true, 00:18:23.510 "claim_type": "exclusive_write", 00:18:23.510 "zoned": false, 00:18:23.510 "supported_io_types": { 00:18:23.510 "read": true, 00:18:23.510 "write": true, 00:18:23.510 "unmap": true, 00:18:23.510 "flush": true, 00:18:23.510 "reset": true, 00:18:23.510 "nvme_admin": false, 00:18:23.510 "nvme_io": false, 00:18:23.510 "nvme_io_md": false, 00:18:23.510 "write_zeroes": true, 00:18:23.510 "zcopy": true, 00:18:23.510 "get_zone_info": false, 00:18:23.510 "zone_management": false, 00:18:23.510 "zone_append": false, 00:18:23.510 "compare": false, 00:18:23.510 "compare_and_write": false, 00:18:23.510 "abort": true, 00:18:23.510 "seek_hole": false, 00:18:23.510 "seek_data": false, 00:18:23.510 "copy": true, 00:18:23.510 "nvme_iov_md": false 00:18:23.510 }, 00:18:23.510 "memory_domains": [ 00:18:23.510 { 00:18:23.510 "dma_device_id": "system", 00:18:23.510 "dma_device_type": 1 00:18:23.510 }, 00:18:23.510 { 00:18:23.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.510 "dma_device_type": 2 00:18:23.510 } 
00:18:23.510 ], 00:18:23.510 "driver_specific": {} 00:18:23.510 } 00:18:23.510 ] 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.510 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.511 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.511 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.511 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.511 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.511 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.770 
04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.770 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.770 "name": "Existed_Raid", 00:18:23.770 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:23.770 "strip_size_kb": 64, 00:18:23.770 "state": "online", 00:18:23.770 "raid_level": "raid5f", 00:18:23.770 "superblock": true, 00:18:23.770 "num_base_bdevs": 4, 00:18:23.770 "num_base_bdevs_discovered": 4, 00:18:23.770 "num_base_bdevs_operational": 4, 00:18:23.770 "base_bdevs_list": [ 00:18:23.770 { 00:18:23.770 "name": "NewBaseBdev", 00:18:23.770 "uuid": "132f8941-686f-4bbe-9d5a-3c9d085ead2e", 00:18:23.770 "is_configured": true, 00:18:23.770 "data_offset": 2048, 00:18:23.770 "data_size": 63488 00:18:23.770 }, 00:18:23.770 { 00:18:23.770 "name": "BaseBdev2", 00:18:23.770 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:23.770 "is_configured": true, 00:18:23.770 "data_offset": 2048, 00:18:23.770 "data_size": 63488 00:18:23.770 }, 00:18:23.770 { 00:18:23.770 "name": "BaseBdev3", 00:18:23.770 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 00:18:23.770 "is_configured": true, 00:18:23.770 "data_offset": 2048, 00:18:23.770 "data_size": 63488 00:18:23.770 }, 00:18:23.770 { 00:18:23.770 "name": "BaseBdev4", 00:18:23.770 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:23.770 "is_configured": true, 00:18:23.770 "data_offset": 2048, 00:18:23.770 "data_size": 63488 00:18:23.770 } 00:18:23.770 ] 00:18:23.770 }' 00:18:23.770 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.770 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.029 [2024-11-27 04:35:20.502358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.029 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:24.029 "name": "Existed_Raid", 00:18:24.029 "aliases": [ 00:18:24.029 "08dad7c5-bc7f-446f-98e1-478977e67be6" 00:18:24.029 ], 00:18:24.029 "product_name": "Raid Volume", 00:18:24.029 "block_size": 512, 00:18:24.029 "num_blocks": 190464, 00:18:24.029 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:24.029 "assigned_rate_limits": { 00:18:24.029 "rw_ios_per_sec": 0, 00:18:24.029 "rw_mbytes_per_sec": 0, 00:18:24.029 "r_mbytes_per_sec": 0, 00:18:24.029 "w_mbytes_per_sec": 0 00:18:24.029 }, 00:18:24.029 "claimed": false, 00:18:24.029 "zoned": false, 00:18:24.029 "supported_io_types": { 00:18:24.029 "read": true, 00:18:24.029 "write": true, 00:18:24.029 "unmap": false, 00:18:24.029 "flush": false, 
00:18:24.029 "reset": true, 00:18:24.029 "nvme_admin": false, 00:18:24.029 "nvme_io": false, 00:18:24.029 "nvme_io_md": false, 00:18:24.029 "write_zeroes": true, 00:18:24.029 "zcopy": false, 00:18:24.029 "get_zone_info": false, 00:18:24.029 "zone_management": false, 00:18:24.029 "zone_append": false, 00:18:24.029 "compare": false, 00:18:24.030 "compare_and_write": false, 00:18:24.030 "abort": false, 00:18:24.030 "seek_hole": false, 00:18:24.030 "seek_data": false, 00:18:24.030 "copy": false, 00:18:24.030 "nvme_iov_md": false 00:18:24.030 }, 00:18:24.030 "driver_specific": { 00:18:24.030 "raid": { 00:18:24.030 "uuid": "08dad7c5-bc7f-446f-98e1-478977e67be6", 00:18:24.030 "strip_size_kb": 64, 00:18:24.030 "state": "online", 00:18:24.030 "raid_level": "raid5f", 00:18:24.030 "superblock": true, 00:18:24.030 "num_base_bdevs": 4, 00:18:24.030 "num_base_bdevs_discovered": 4, 00:18:24.030 "num_base_bdevs_operational": 4, 00:18:24.030 "base_bdevs_list": [ 00:18:24.030 { 00:18:24.030 "name": "NewBaseBdev", 00:18:24.030 "uuid": "132f8941-686f-4bbe-9d5a-3c9d085ead2e", 00:18:24.030 "is_configured": true, 00:18:24.030 "data_offset": 2048, 00:18:24.030 "data_size": 63488 00:18:24.030 }, 00:18:24.030 { 00:18:24.030 "name": "BaseBdev2", 00:18:24.030 "uuid": "f3e568a3-3c4e-4e4a-a415-9772c5a9d2ea", 00:18:24.030 "is_configured": true, 00:18:24.030 "data_offset": 2048, 00:18:24.030 "data_size": 63488 00:18:24.030 }, 00:18:24.030 { 00:18:24.030 "name": "BaseBdev3", 00:18:24.030 "uuid": "c6fbd681-121d-4df9-aae6-82774b4097fc", 00:18:24.030 "is_configured": true, 00:18:24.030 "data_offset": 2048, 00:18:24.030 "data_size": 63488 00:18:24.030 }, 00:18:24.030 { 00:18:24.030 "name": "BaseBdev4", 00:18:24.030 "uuid": "bb6fff10-6362-49e2-9b3d-4ebe4d5c132e", 00:18:24.030 "is_configured": true, 00:18:24.030 "data_offset": 2048, 00:18:24.030 "data_size": 63488 00:18:24.030 } 00:18:24.030 ] 00:18:24.030 } 00:18:24.030 } 00:18:24.030 }' 00:18:24.030 04:35:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:24.030 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:24.030 BaseBdev2 00:18:24.030 BaseBdev3 00:18:24.030 BaseBdev4' 00:18:24.030 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:24.290 
04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] 
| [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.290 [2024-11-27 04:35:20.813571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:24.290 [2024-11-27 04:35:20.813613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.290 [2024-11-27 04:35:20.813707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.290 [2024-11-27 04:35:20.814050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.290 [2024-11-27 04:35:20.814071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83796 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83796 ']' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83796 
00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83796 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.290 killing process with pid 83796 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83796' 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83796 00:18:24.290 [2024-11-27 04:35:20.867150] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.290 04:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83796 00:18:24.861 [2024-11-27 04:35:21.339509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.241 04:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:26.241 00:18:26.241 real 0m12.331s 00:18:26.241 user 0m19.540s 00:18:26.241 sys 0m2.155s 00:18:26.241 04:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.241 04:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.241 ************************************ 00:18:26.241 END TEST raid5f_state_function_test_sb 00:18:26.241 ************************************ 00:18:26.241 04:35:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:26.241 04:35:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:18:26.241 04:35:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.241 04:35:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.241 ************************************ 00:18:26.241 START TEST raid5f_superblock_test 00:18:26.241 ************************************ 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:26.241 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84473 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84473 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84473 ']' 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.242 04:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.502 [2024-11-27 04:35:22.829103] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:26.502 [2024-11-27 04:35:22.829223] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84473 ] 00:18:26.502 [2024-11-27 04:35:22.997457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.761 [2024-11-27 04:35:23.130068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.021 [2024-11-27 04:35:23.361693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.021 [2024-11-27 04:35:23.361768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.281 malloc1 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.281 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.281 [2024-11-27 04:35:23.807541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:27.281 [2024-11-27 04:35:23.807609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.281 [2024-11-27 04:35:23.807636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:27.281 [2024-11-27 04:35:23.807649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.282 [2024-11-27 04:35:23.810465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.282 [2024-11-27 04:35:23.810511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:27.282 pt1 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.282 malloc2 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.282 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.545 [2024-11-27 04:35:23.865958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:27.545 [2024-11-27 04:35:23.866048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.545 [2024-11-27 04:35:23.866082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:27.545 [2024-11-27 04:35:23.866095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.545 [2024-11-27 04:35:23.868762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.545 [2024-11-27 04:35:23.868816] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:27.545 pt2 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.545 malloc3 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.545 [2024-11-27 04:35:23.938852] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:27.545 [2024-11-27 04:35:23.938938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.545 [2024-11-27 04:35:23.938978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:27.545 [2024-11-27 04:35:23.938994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.545 [2024-11-27 04:35:23.941799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.545 [2024-11-27 04:35:23.941844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:27.545 pt3 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.545 04:35:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.545 malloc4 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.545 04:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.545 [2024-11-27 04:35:24.001926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:27.545 [2024-11-27 04:35:24.001993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.545 [2024-11-27 04:35:24.002017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:27.545 [2024-11-27 04:35:24.002027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.545 [2024-11-27 04:35:24.004516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.545 [2024-11-27 04:35:24.004557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:27.545 pt4 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:27.545 [2024-11-27 04:35:24.013943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:27.545 [2024-11-27 04:35:24.016056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:27.545 [2024-11-27 04:35:24.016175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:27.545 [2024-11-27 04:35:24.016234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:27.545 [2024-11-27 04:35:24.016458] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:27.545 [2024-11-27 04:35:24.016485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:27.545 [2024-11-27 04:35:24.016783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:27.545 [2024-11-27 04:35:24.025644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:27.545 [2024-11-27 04:35:24.025674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:27.545 [2024-11-27 04:35:24.025892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.545 
04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.545 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.545 "name": "raid_bdev1", 00:18:27.545 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:27.545 "strip_size_kb": 64, 00:18:27.545 "state": "online", 00:18:27.545 "raid_level": "raid5f", 00:18:27.545 "superblock": true, 00:18:27.545 "num_base_bdevs": 4, 00:18:27.545 "num_base_bdevs_discovered": 4, 00:18:27.545 "num_base_bdevs_operational": 4, 00:18:27.545 "base_bdevs_list": [ 00:18:27.545 { 00:18:27.546 "name": "pt1", 00:18:27.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.546 "is_configured": true, 00:18:27.546 "data_offset": 2048, 00:18:27.546 "data_size": 63488 00:18:27.546 }, 00:18:27.546 { 00:18:27.546 "name": "pt2", 00:18:27.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.546 "is_configured": true, 00:18:27.546 "data_offset": 2048, 00:18:27.546 
"data_size": 63488 00:18:27.546 }, 00:18:27.546 { 00:18:27.546 "name": "pt3", 00:18:27.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:27.546 "is_configured": true, 00:18:27.546 "data_offset": 2048, 00:18:27.546 "data_size": 63488 00:18:27.546 }, 00:18:27.546 { 00:18:27.546 "name": "pt4", 00:18:27.546 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:27.546 "is_configured": true, 00:18:27.546 "data_offset": 2048, 00:18:27.546 "data_size": 63488 00:18:27.546 } 00:18:27.546 ] 00:18:27.546 }' 00:18:27.546 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.546 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.120 [2024-11-27 04:35:24.467579] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:28.120 "name": "raid_bdev1", 00:18:28.120 "aliases": [ 00:18:28.120 "622c9503-febd-4e8f-8121-db93cd69e3da" 00:18:28.120 ], 00:18:28.120 "product_name": "Raid Volume", 00:18:28.120 "block_size": 512, 00:18:28.120 "num_blocks": 190464, 00:18:28.120 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:28.120 "assigned_rate_limits": { 00:18:28.120 "rw_ios_per_sec": 0, 00:18:28.120 "rw_mbytes_per_sec": 0, 00:18:28.120 "r_mbytes_per_sec": 0, 00:18:28.120 "w_mbytes_per_sec": 0 00:18:28.120 }, 00:18:28.120 "claimed": false, 00:18:28.120 "zoned": false, 00:18:28.120 "supported_io_types": { 00:18:28.120 "read": true, 00:18:28.120 "write": true, 00:18:28.120 "unmap": false, 00:18:28.120 "flush": false, 00:18:28.120 "reset": true, 00:18:28.120 "nvme_admin": false, 00:18:28.120 "nvme_io": false, 00:18:28.120 "nvme_io_md": false, 00:18:28.120 "write_zeroes": true, 00:18:28.120 "zcopy": false, 00:18:28.120 "get_zone_info": false, 00:18:28.120 "zone_management": false, 00:18:28.120 "zone_append": false, 00:18:28.120 "compare": false, 00:18:28.120 "compare_and_write": false, 00:18:28.120 "abort": false, 00:18:28.120 "seek_hole": false, 00:18:28.120 "seek_data": false, 00:18:28.120 "copy": false, 00:18:28.120 "nvme_iov_md": false 00:18:28.120 }, 00:18:28.120 "driver_specific": { 00:18:28.120 "raid": { 00:18:28.120 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:28.120 "strip_size_kb": 64, 00:18:28.120 "state": "online", 00:18:28.120 "raid_level": "raid5f", 00:18:28.120 "superblock": true, 00:18:28.120 "num_base_bdevs": 4, 00:18:28.120 "num_base_bdevs_discovered": 4, 00:18:28.120 "num_base_bdevs_operational": 4, 00:18:28.120 "base_bdevs_list": [ 00:18:28.120 { 00:18:28.120 "name": "pt1", 00:18:28.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.120 "is_configured": true, 00:18:28.120 "data_offset": 2048, 
00:18:28.120 "data_size": 63488 00:18:28.120 }, 00:18:28.120 { 00:18:28.120 "name": "pt2", 00:18:28.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.120 "is_configured": true, 00:18:28.120 "data_offset": 2048, 00:18:28.120 "data_size": 63488 00:18:28.120 }, 00:18:28.120 { 00:18:28.120 "name": "pt3", 00:18:28.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:28.120 "is_configured": true, 00:18:28.120 "data_offset": 2048, 00:18:28.120 "data_size": 63488 00:18:28.120 }, 00:18:28.120 { 00:18:28.120 "name": "pt4", 00:18:28.120 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:28.120 "is_configured": true, 00:18:28.120 "data_offset": 2048, 00:18:28.120 "data_size": 63488 00:18:28.120 } 00:18:28.120 ] 00:18:28.120 } 00:18:28.120 } 00:18:28.120 }' 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:28.120 pt2 00:18:28.120 pt3 00:18:28.120 pt4' 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.120 04:35:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.120 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.380 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.380 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.380 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.380 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.380 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:28.380 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.380 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.380 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.380 04:35:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.381 [2024-11-27 04:35:24.802998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=622c9503-febd-4e8f-8121-db93cd69e3da 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
622c9503-febd-4e8f-8121-db93cd69e3da ']' 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.381 [2024-11-27 04:35:24.850723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.381 [2024-11-27 04:35:24.850759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.381 [2024-11-27 04:35:24.850861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.381 [2024-11-27 04:35:24.850956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.381 [2024-11-27 04:35:24.850973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:28.381 
04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.381 04:35:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:28.642 04:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.642 [2024-11-27 04:35:25.006499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:28.642 [2024-11-27 04:35:25.008634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:28.642 [2024-11-27 04:35:25.008699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:28.642 [2024-11-27 04:35:25.008739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:28.642 [2024-11-27 04:35:25.008796] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:28.642 [2024-11-27 04:35:25.008849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:28.642 [2024-11-27 04:35:25.008870] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:28.642 [2024-11-27 04:35:25.008893] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:28.642 [2024-11-27 04:35:25.008909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.642 [2024-11-27 04:35:25.008922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:28.642 request: 00:18:28.642 { 00:18:28.642 "name": "raid_bdev1", 00:18:28.642 "raid_level": "raid5f", 00:18:28.642 "base_bdevs": [ 00:18:28.642 "malloc1", 00:18:28.642 "malloc2", 00:18:28.642 "malloc3", 00:18:28.642 "malloc4" 00:18:28.642 ], 00:18:28.642 "strip_size_kb": 64, 00:18:28.642 "superblock": false, 00:18:28.642 "method": "bdev_raid_create", 00:18:28.642 "req_id": 1 00:18:28.642 } 00:18:28.642 Got JSON-RPC error response 
00:18:28.642 response: 00:18:28.642 { 00:18:28.642 "code": -17, 00:18:28.642 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:28.642 } 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.642 [2024-11-27 04:35:25.070345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.642 [2024-11-27 04:35:25.070416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:28.642 [2024-11-27 04:35:25.070437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:28.642 [2024-11-27 04:35:25.070449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.642 [2024-11-27 04:35:25.072997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.642 [2024-11-27 04:35:25.073040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:28.642 [2024-11-27 04:35:25.073147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:28.642 [2024-11-27 04:35:25.073215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:28.642 pt1 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.642 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.643 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.643 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.643 "name": "raid_bdev1", 00:18:28.643 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:28.643 "strip_size_kb": 64, 00:18:28.643 "state": "configuring", 00:18:28.643 "raid_level": "raid5f", 00:18:28.643 "superblock": true, 00:18:28.643 "num_base_bdevs": 4, 00:18:28.643 "num_base_bdevs_discovered": 1, 00:18:28.643 "num_base_bdevs_operational": 4, 00:18:28.643 "base_bdevs_list": [ 00:18:28.643 { 00:18:28.643 "name": "pt1", 00:18:28.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.643 "is_configured": true, 00:18:28.643 "data_offset": 2048, 00:18:28.643 "data_size": 63488 00:18:28.643 }, 00:18:28.643 { 00:18:28.643 "name": null, 00:18:28.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.643 "is_configured": false, 00:18:28.643 "data_offset": 2048, 00:18:28.643 "data_size": 63488 00:18:28.643 }, 00:18:28.643 { 00:18:28.643 "name": null, 00:18:28.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:28.643 "is_configured": false, 00:18:28.643 "data_offset": 2048, 00:18:28.643 "data_size": 63488 00:18:28.643 }, 00:18:28.643 { 00:18:28.643 "name": null, 00:18:28.643 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:28.643 "is_configured": false, 00:18:28.643 "data_offset": 2048, 00:18:28.643 "data_size": 63488 00:18:28.643 } 00:18:28.643 ] 00:18:28.643 }' 
00:18:28.643 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.643 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.212 [2024-11-27 04:35:25.557542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:29.212 [2024-11-27 04:35:25.557621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.212 [2024-11-27 04:35:25.557643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:29.212 [2024-11-27 04:35:25.557656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.212 [2024-11-27 04:35:25.558165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.212 [2024-11-27 04:35:25.558189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:29.212 [2024-11-27 04:35:25.558279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:29.212 [2024-11-27 04:35:25.558308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:29.212 pt2 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
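The bdev_raid_create call near the top of this excerpt failed with JSON-RPC code -17 ("File exists") because the malloc base bdevs already carried a superblock from a different raid bdev. The negative code lines up with a negated POSIX errno; a minimal sketch of decoding the response (the JSON literal is copied from the log above, the decoding logic is an illustration, not SPDK code):

```python
import errno
import json

# Error body as reported after "Got JSON-RPC error response" above
response = json.loads(
    '{"code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists"}'
)

# Negating the code recovers the POSIX errno name: 17 -> EEXIST
name = errno.errorcode[-response["code"]]
print(name)                   # EEXIST
print(response["message"])    # Failed to create RAID bdev raid_bdev1: File exists
```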
00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.212 [2024-11-27 04:35:25.569527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.212 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:29.213 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.213 "name": "raid_bdev1", 00:18:29.213 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:29.213 "strip_size_kb": 64, 00:18:29.213 "state": "configuring", 00:18:29.213 "raid_level": "raid5f", 00:18:29.213 "superblock": true, 00:18:29.213 "num_base_bdevs": 4, 00:18:29.213 "num_base_bdevs_discovered": 1, 00:18:29.213 "num_base_bdevs_operational": 4, 00:18:29.213 "base_bdevs_list": [ 00:18:29.213 { 00:18:29.213 "name": "pt1", 00:18:29.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.213 "is_configured": true, 00:18:29.213 "data_offset": 2048, 00:18:29.213 "data_size": 63488 00:18:29.213 }, 00:18:29.213 { 00:18:29.213 "name": null, 00:18:29.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.213 "is_configured": false, 00:18:29.213 "data_offset": 0, 00:18:29.213 "data_size": 63488 00:18:29.213 }, 00:18:29.213 { 00:18:29.213 "name": null, 00:18:29.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.213 "is_configured": false, 00:18:29.213 "data_offset": 2048, 00:18:29.213 "data_size": 63488 00:18:29.213 }, 00:18:29.213 { 00:18:29.213 "name": null, 00:18:29.213 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:29.213 "is_configured": false, 00:18:29.213 "data_offset": 2048, 00:18:29.213 "data_size": 63488 00:18:29.213 } 00:18:29.213 ] 00:18:29.213 }' 00:18:29.213 04:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.213 04:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
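verify_raid_bdev_state filters the bdev_raid_get_bdevs output with jq (`.[] | select(.name == "raid_bdev1")`) and then compares the selected fields against the expected state, level, strip size, and base bdev counts. A minimal Python replica of that check, using an abbreviated copy of the raid_bdev_info JSON shown above (field names are taken from the log; the replica itself is hypothetical, not part of the test suite):

```python
import json

# Abbreviated copy of the raid_bdev_info JSON captured in the log above
bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "configuring",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The comparisons verify_raid_bdev_state performs on the selected entry
assert info["state"] == "configuring"
assert info["raid_level"] == "raid5f"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 4
print("raid_bdev1:", info["state"])   # raid_bdev1: configuring
```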
00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.782 [2024-11-27 04:35:26.064716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:29.782 [2024-11-27 04:35:26.064787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.782 [2024-11-27 04:35:26.064811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:29.782 [2024-11-27 04:35:26.064821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.782 [2024-11-27 04:35:26.065337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.782 [2024-11-27 04:35:26.065361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:29.782 [2024-11-27 04:35:26.065449] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:29.782 [2024-11-27 04:35:26.065474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:29.782 pt2 00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:29.782 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.783 [2024-11-27 04:35:26.076678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:29.783 [2024-11-27 04:35:26.076738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.783 [2024-11-27 04:35:26.076765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:29.783 [2024-11-27 04:35:26.076777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.783 [2024-11-27 04:35:26.077247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.783 [2024-11-27 04:35:26.077271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:29.783 [2024-11-27 04:35:26.077349] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:29.783 [2024-11-27 04:35:26.077384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:29.783 pt3 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.783 [2024-11-27 04:35:26.084627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:29.783 [2024-11-27 04:35:26.084684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.783 [2024-11-27 04:35:26.084702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:29.783 [2024-11-27 04:35:26.084711] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.783 [2024-11-27 04:35:26.085142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.783 [2024-11-27 04:35:26.085165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:29.783 [2024-11-27 04:35:26.085235] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:29.783 [2024-11-27 04:35:26.085258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:29.783 [2024-11-27 04:35:26.085408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:29.783 [2024-11-27 04:35:26.085418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:29.783 [2024-11-27 04:35:26.085681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:29.783 [2024-11-27 04:35:26.093539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:29.783 [2024-11-27 04:35:26.093584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:29.783 [2024-11-27 04:35:26.093770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.783 pt4 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.783 "name": "raid_bdev1", 00:18:29.783 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:29.783 "strip_size_kb": 64, 00:18:29.783 "state": "online", 00:18:29.783 "raid_level": "raid5f", 00:18:29.783 "superblock": true, 00:18:29.783 "num_base_bdevs": 4, 00:18:29.783 "num_base_bdevs_discovered": 4, 00:18:29.783 "num_base_bdevs_operational": 4, 00:18:29.783 "base_bdevs_list": [ 00:18:29.783 { 00:18:29.783 "name": "pt1", 00:18:29.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.783 "is_configured": true, 00:18:29.783 
"data_offset": 2048, 00:18:29.783 "data_size": 63488 00:18:29.783 }, 00:18:29.783 { 00:18:29.783 "name": "pt2", 00:18:29.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.783 "is_configured": true, 00:18:29.783 "data_offset": 2048, 00:18:29.783 "data_size": 63488 00:18:29.783 }, 00:18:29.783 { 00:18:29.783 "name": "pt3", 00:18:29.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.783 "is_configured": true, 00:18:29.783 "data_offset": 2048, 00:18:29.783 "data_size": 63488 00:18:29.783 }, 00:18:29.783 { 00:18:29.783 "name": "pt4", 00:18:29.783 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:29.783 "is_configured": true, 00:18:29.783 "data_offset": 2048, 00:18:29.783 "data_size": 63488 00:18:29.783 } 00:18:29.783 ] 00:18:29.783 }' 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.783 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:30.043 04:35:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.043 [2024-11-27 04:35:26.571313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:30.043 "name": "raid_bdev1", 00:18:30.043 "aliases": [ 00:18:30.043 "622c9503-febd-4e8f-8121-db93cd69e3da" 00:18:30.043 ], 00:18:30.043 "product_name": "Raid Volume", 00:18:30.043 "block_size": 512, 00:18:30.043 "num_blocks": 190464, 00:18:30.043 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:30.043 "assigned_rate_limits": { 00:18:30.043 "rw_ios_per_sec": 0, 00:18:30.043 "rw_mbytes_per_sec": 0, 00:18:30.043 "r_mbytes_per_sec": 0, 00:18:30.043 "w_mbytes_per_sec": 0 00:18:30.043 }, 00:18:30.043 "claimed": false, 00:18:30.043 "zoned": false, 00:18:30.043 "supported_io_types": { 00:18:30.043 "read": true, 00:18:30.043 "write": true, 00:18:30.043 "unmap": false, 00:18:30.043 "flush": false, 00:18:30.043 "reset": true, 00:18:30.043 "nvme_admin": false, 00:18:30.043 "nvme_io": false, 00:18:30.043 "nvme_io_md": false, 00:18:30.043 "write_zeroes": true, 00:18:30.043 "zcopy": false, 00:18:30.043 "get_zone_info": false, 00:18:30.043 "zone_management": false, 00:18:30.043 "zone_append": false, 00:18:30.043 "compare": false, 00:18:30.043 "compare_and_write": false, 00:18:30.043 "abort": false, 00:18:30.043 "seek_hole": false, 00:18:30.043 "seek_data": false, 00:18:30.043 "copy": false, 00:18:30.043 "nvme_iov_md": false 00:18:30.043 }, 00:18:30.043 "driver_specific": { 00:18:30.043 "raid": { 00:18:30.043 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:30.043 "strip_size_kb": 64, 00:18:30.043 "state": "online", 00:18:30.043 "raid_level": "raid5f", 00:18:30.043 "superblock": true, 00:18:30.043 "num_base_bdevs": 4, 00:18:30.043 "num_base_bdevs_discovered": 4, 
00:18:30.043 "num_base_bdevs_operational": 4, 00:18:30.043 "base_bdevs_list": [ 00:18:30.043 { 00:18:30.043 "name": "pt1", 00:18:30.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.043 "is_configured": true, 00:18:30.043 "data_offset": 2048, 00:18:30.043 "data_size": 63488 00:18:30.043 }, 00:18:30.043 { 00:18:30.043 "name": "pt2", 00:18:30.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.043 "is_configured": true, 00:18:30.043 "data_offset": 2048, 00:18:30.043 "data_size": 63488 00:18:30.043 }, 00:18:30.043 { 00:18:30.043 "name": "pt3", 00:18:30.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.043 "is_configured": true, 00:18:30.043 "data_offset": 2048, 00:18:30.043 "data_size": 63488 00:18:30.043 }, 00:18:30.043 { 00:18:30.043 "name": "pt4", 00:18:30.043 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:30.043 "is_configured": true, 00:18:30.043 "data_offset": 2048, 00:18:30.043 "data_size": 63488 00:18:30.043 } 00:18:30.043 ] 00:18:30.043 } 00:18:30.043 } 00:18:30.043 }' 00:18:30.043 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:30.303 pt2 00:18:30.303 pt3 00:18:30.303 pt4' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.303 04:35:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.303 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:30.564 [2024-11-27 04:35:26.906703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.564 
04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 622c9503-febd-4e8f-8121-db93cd69e3da '!=' 622c9503-febd-4e8f-8121-db93cd69e3da ']' 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.564 [2024-11-27 04:35:26.942476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.564 04:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.564 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.564 "name": "raid_bdev1", 00:18:30.564 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:30.564 "strip_size_kb": 64, 00:18:30.564 "state": "online", 00:18:30.564 "raid_level": "raid5f", 00:18:30.564 "superblock": true, 00:18:30.564 "num_base_bdevs": 4, 00:18:30.564 "num_base_bdevs_discovered": 3, 00:18:30.564 "num_base_bdevs_operational": 3, 00:18:30.564 "base_bdevs_list": [ 00:18:30.564 { 00:18:30.564 "name": null, 00:18:30.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.564 "is_configured": false, 00:18:30.564 "data_offset": 0, 00:18:30.564 "data_size": 63488 00:18:30.564 }, 00:18:30.564 { 00:18:30.564 "name": "pt2", 00:18:30.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.564 "is_configured": true, 00:18:30.564 "data_offset": 2048, 00:18:30.564 "data_size": 63488 00:18:30.564 }, 00:18:30.564 { 00:18:30.564 "name": "pt3", 00:18:30.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.564 "is_configured": true, 00:18:30.564 "data_offset": 2048, 00:18:30.564 "data_size": 63488 00:18:30.564 }, 00:18:30.564 { 00:18:30.564 "name": "pt4", 00:18:30.564 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:30.564 "is_configured": true, 00:18:30.564 
"data_offset": 2048, 00:18:30.564 "data_size": 63488 00:18:30.564 } 00:18:30.564 ] 00:18:30.564 }' 00:18:30.564 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.564 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.824 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:30.824 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.824 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.824 [2024-11-27 04:35:27.405644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.824 [2024-11-27 04:35:27.405686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.824 [2024-11-27 04:35:27.405781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.824 [2024-11-27 04:35:27.405869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.824 [2024-11-27 04:35:27.405881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:31.085 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.085 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:31.085 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.085 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.085 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.086 [2024-11-27 04:35:27.505486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:31.086 [2024-11-27 04:35:27.505570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.086 [2024-11-27 04:35:27.505591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:31.086 [2024-11-27 04:35:27.505602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.086 [2024-11-27 04:35:27.508146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.086 [2024-11-27 04:35:27.508183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:31.086 [2024-11-27 04:35:27.508279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:31.086 [2024-11-27 04:35:27.508345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.086 pt2 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.086 "name": "raid_bdev1", 00:18:31.086 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:31.086 "strip_size_kb": 64, 00:18:31.086 "state": "configuring", 00:18:31.086 "raid_level": "raid5f", 00:18:31.086 "superblock": true, 00:18:31.086 
"num_base_bdevs": 4, 00:18:31.086 "num_base_bdevs_discovered": 1, 00:18:31.086 "num_base_bdevs_operational": 3, 00:18:31.086 "base_bdevs_list": [ 00:18:31.086 { 00:18:31.086 "name": null, 00:18:31.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.086 "is_configured": false, 00:18:31.086 "data_offset": 2048, 00:18:31.086 "data_size": 63488 00:18:31.086 }, 00:18:31.086 { 00:18:31.086 "name": "pt2", 00:18:31.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.086 "is_configured": true, 00:18:31.086 "data_offset": 2048, 00:18:31.086 "data_size": 63488 00:18:31.086 }, 00:18:31.086 { 00:18:31.086 "name": null, 00:18:31.086 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.086 "is_configured": false, 00:18:31.086 "data_offset": 2048, 00:18:31.086 "data_size": 63488 00:18:31.086 }, 00:18:31.086 { 00:18:31.086 "name": null, 00:18:31.086 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:31.086 "is_configured": false, 00:18:31.086 "data_offset": 2048, 00:18:31.086 "data_size": 63488 00:18:31.086 } 00:18:31.086 ] 00:18:31.086 }' 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.086 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.656 [2024-11-27 04:35:27.964738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:31.656 [2024-11-27 
04:35:27.964834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.656 [2024-11-27 04:35:27.964864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:31.656 [2024-11-27 04:35:27.964874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.656 [2024-11-27 04:35:27.965378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.656 [2024-11-27 04:35:27.965407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:31.656 [2024-11-27 04:35:27.965504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:31.656 [2024-11-27 04:35:27.965532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:31.656 pt3 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.656 04:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.656 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.656 "name": "raid_bdev1", 00:18:31.656 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:31.656 "strip_size_kb": 64, 00:18:31.656 "state": "configuring", 00:18:31.656 "raid_level": "raid5f", 00:18:31.656 "superblock": true, 00:18:31.656 "num_base_bdevs": 4, 00:18:31.656 "num_base_bdevs_discovered": 2, 00:18:31.656 "num_base_bdevs_operational": 3, 00:18:31.656 "base_bdevs_list": [ 00:18:31.656 { 00:18:31.656 "name": null, 00:18:31.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.656 "is_configured": false, 00:18:31.656 "data_offset": 2048, 00:18:31.656 "data_size": 63488 00:18:31.656 }, 00:18:31.656 { 00:18:31.656 "name": "pt2", 00:18:31.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.656 "is_configured": true, 00:18:31.656 "data_offset": 2048, 00:18:31.656 "data_size": 63488 00:18:31.656 }, 00:18:31.656 { 00:18:31.656 "name": "pt3", 00:18:31.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.656 "is_configured": true, 00:18:31.656 "data_offset": 2048, 00:18:31.656 "data_size": 63488 00:18:31.656 }, 00:18:31.656 { 00:18:31.656 "name": null, 00:18:31.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:31.656 "is_configured": false, 00:18:31.656 "data_offset": 2048, 
00:18:31.656 "data_size": 63488 00:18:31.656 } 00:18:31.656 ] 00:18:31.656 }' 00:18:31.656 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.656 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.916 [2024-11-27 04:35:28.447921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:31.916 [2024-11-27 04:35:28.447991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.916 [2024-11-27 04:35:28.448015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:31.916 [2024-11-27 04:35:28.448025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.916 [2024-11-27 04:35:28.448544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.916 [2024-11-27 04:35:28.448565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:31.916 [2024-11-27 04:35:28.448655] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:31.916 [2024-11-27 04:35:28.448698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:31.916 [2024-11-27 04:35:28.448861] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:31.916 [2024-11-27 04:35:28.448871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:31.916 [2024-11-27 04:35:28.449147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:31.916 [2024-11-27 04:35:28.457501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:31.916 [2024-11-27 04:35:28.457544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:31.916 [2024-11-27 04:35:28.457921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.916 pt4 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.916 
04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.916 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.176 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.176 "name": "raid_bdev1", 00:18:32.176 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:32.176 "strip_size_kb": 64, 00:18:32.176 "state": "online", 00:18:32.176 "raid_level": "raid5f", 00:18:32.176 "superblock": true, 00:18:32.176 "num_base_bdevs": 4, 00:18:32.176 "num_base_bdevs_discovered": 3, 00:18:32.176 "num_base_bdevs_operational": 3, 00:18:32.176 "base_bdevs_list": [ 00:18:32.176 { 00:18:32.176 "name": null, 00:18:32.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.176 "is_configured": false, 00:18:32.176 "data_offset": 2048, 00:18:32.176 "data_size": 63488 00:18:32.176 }, 00:18:32.176 { 00:18:32.176 "name": "pt2", 00:18:32.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.176 "is_configured": true, 00:18:32.176 "data_offset": 2048, 00:18:32.176 "data_size": 63488 00:18:32.176 }, 00:18:32.176 { 00:18:32.176 "name": "pt3", 00:18:32.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:32.176 "is_configured": true, 00:18:32.176 "data_offset": 2048, 00:18:32.176 "data_size": 63488 00:18:32.176 }, 00:18:32.176 { 00:18:32.176 "name": "pt4", 00:18:32.176 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:32.176 "is_configured": true, 00:18:32.176 "data_offset": 2048, 00:18:32.176 "data_size": 63488 00:18:32.176 } 00:18:32.176 ] 00:18:32.176 }' 00:18:32.176 04:35:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.177 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.447 [2024-11-27 04:35:28.923963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.447 [2024-11-27 04:35:28.924005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.447 [2024-11-27 04:35:28.924115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.447 [2024-11-27 04:35:28.924209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.447 [2024-11-27 04:35:28.924225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.447 04:35:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.447 [2024-11-27 04:35:28.999849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:32.447 [2024-11-27 04:35:28.999935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.447 [2024-11-27 04:35:28.999968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:32.447 [2024-11-27 04:35:28.999987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.447 [2024-11-27 04:35:29.002843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.447 [2024-11-27 04:35:29.002894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:32.447 [2024-11-27 04:35:29.003006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:32.447 [2024-11-27 04:35:29.003068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:32.447 
[2024-11-27 04:35:29.003246] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:32.447 [2024-11-27 04:35:29.003264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.447 [2024-11-27 04:35:29.003284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:32.447 [2024-11-27 04:35:29.003373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:32.447 [2024-11-27 04:35:29.003533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:32.447 pt1 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.447 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.723 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.723 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.723 "name": "raid_bdev1", 00:18:32.723 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:32.723 "strip_size_kb": 64, 00:18:32.723 "state": "configuring", 00:18:32.723 "raid_level": "raid5f", 00:18:32.723 "superblock": true, 00:18:32.723 "num_base_bdevs": 4, 00:18:32.723 "num_base_bdevs_discovered": 2, 00:18:32.723 "num_base_bdevs_operational": 3, 00:18:32.723 "base_bdevs_list": [ 00:18:32.723 { 00:18:32.723 "name": null, 00:18:32.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.723 "is_configured": false, 00:18:32.723 "data_offset": 2048, 00:18:32.723 "data_size": 63488 00:18:32.723 }, 00:18:32.723 { 00:18:32.723 "name": "pt2", 00:18:32.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.723 "is_configured": true, 00:18:32.723 "data_offset": 2048, 00:18:32.723 "data_size": 63488 00:18:32.723 }, 00:18:32.723 { 00:18:32.723 "name": "pt3", 00:18:32.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:32.723 "is_configured": true, 00:18:32.723 "data_offset": 2048, 00:18:32.723 "data_size": 63488 00:18:32.723 }, 00:18:32.723 { 00:18:32.723 "name": null, 00:18:32.723 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:32.723 "is_configured": false, 00:18:32.723 "data_offset": 2048, 00:18:32.723 "data_size": 63488 00:18:32.723 } 00:18:32.723 ] 
00:18:32.723 }' 00:18:32.723 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.723 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.984 [2024-11-27 04:35:29.523295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:32.984 [2024-11-27 04:35:29.523367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.984 [2024-11-27 04:35:29.523393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:32.984 [2024-11-27 04:35:29.523404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.984 [2024-11-27 04:35:29.523987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.984 [2024-11-27 04:35:29.524010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:32.984 [2024-11-27 04:35:29.524138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:32.984 [2024-11-27 04:35:29.524168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:32.984 [2024-11-27 04:35:29.524346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:32.984 [2024-11-27 04:35:29.524357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:32.984 [2024-11-27 04:35:29.524690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:32.984 [2024-11-27 04:35:29.534235] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:32.984 [2024-11-27 04:35:29.534275] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:32.984 [2024-11-27 04:35:29.534652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.984 pt4 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.984 04:35:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.984 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.243 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.243 "name": "raid_bdev1", 00:18:33.243 "uuid": "622c9503-febd-4e8f-8121-db93cd69e3da", 00:18:33.243 "strip_size_kb": 64, 00:18:33.243 "state": "online", 00:18:33.243 "raid_level": "raid5f", 00:18:33.243 "superblock": true, 00:18:33.243 "num_base_bdevs": 4, 00:18:33.243 "num_base_bdevs_discovered": 3, 00:18:33.243 "num_base_bdevs_operational": 3, 00:18:33.243 "base_bdevs_list": [ 00:18:33.243 { 00:18:33.243 "name": null, 00:18:33.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.243 "is_configured": false, 00:18:33.243 "data_offset": 2048, 00:18:33.243 "data_size": 63488 00:18:33.243 }, 00:18:33.243 { 00:18:33.243 "name": "pt2", 00:18:33.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.243 "is_configured": true, 00:18:33.243 "data_offset": 2048, 00:18:33.243 "data_size": 63488 00:18:33.243 }, 00:18:33.243 { 00:18:33.243 "name": "pt3", 00:18:33.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:33.243 "is_configured": true, 00:18:33.243 "data_offset": 2048, 00:18:33.243 "data_size": 63488 
00:18:33.243 }, 00:18:33.243 { 00:18:33.243 "name": "pt4", 00:18:33.243 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:33.243 "is_configured": true, 00:18:33.243 "data_offset": 2048, 00:18:33.243 "data_size": 63488 00:18:33.243 } 00:18:33.243 ] 00:18:33.243 }' 00:18:33.243 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.243 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.503 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:33.503 04:35:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:33.503 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.503 04:35:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.503 [2024-11-27 04:35:30.049569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 622c9503-febd-4e8f-8121-db93cd69e3da '!=' 622c9503-febd-4e8f-8121-db93cd69e3da ']' 00:18:33.503 04:35:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84473 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84473 ']' 00:18:33.503 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84473 00:18:33.762 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:33.762 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.762 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84473 00:18:33.762 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.762 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.762 killing process with pid 84473 00:18:33.762 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84473' 00:18:33.762 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84473 00:18:33.762 [2024-11-27 04:35:30.120036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.762 [2024-11-27 04:35:30.120177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.762 04:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84473 00:18:33.762 [2024-11-27 04:35:30.120277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.762 [2024-11-27 04:35:30.120298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:34.022 [2024-11-27 04:35:30.590230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:35.422 04:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:35.422 
00:18:35.422 real 0m9.218s 00:18:35.422 user 0m14.431s 00:18:35.422 sys 0m1.617s 00:18:35.422 04:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.422 04:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.422 ************************************ 00:18:35.422 END TEST raid5f_superblock_test 00:18:35.422 ************************************ 00:18:35.422 04:35:32 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:35.422 04:35:32 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:35.422 04:35:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:35.422 04:35:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.422 04:35:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:35.682 ************************************ 00:18:35.682 START TEST raid5f_rebuild_test 00:18:35.682 ************************************ 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:35.682 04:35:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84965 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84965 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84965 ']' 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.682 04:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.682 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:35.682 Zero copy mechanism will not be used. 00:18:35.682 [2024-11-27 04:35:32.137219] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:35.682 [2024-11-27 04:35:32.137344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84965 ] 00:18:35.941 [2024-11-27 04:35:32.305267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.941 [2024-11-27 04:35:32.441723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.200 [2024-11-27 04:35:32.684690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.200 [2024-11-27 04:35:32.684731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.460 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.460 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:36.460 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:36.460 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:36.460 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.460 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 BaseBdev1_malloc 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 [2024-11-27 04:35:33.081437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:18:36.719 [2024-11-27 04:35:33.081528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.719 [2024-11-27 04:35:33.081554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:36.719 [2024-11-27 04:35:33.081568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.719 [2024-11-27 04:35:33.084065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.719 [2024-11-27 04:35:33.084125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:36.719 BaseBdev1 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 BaseBdev2_malloc 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 [2024-11-27 04:35:33.143263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:36.719 [2024-11-27 04:35:33.143339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.719 [2024-11-27 04:35:33.143368] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:36.719 [2024-11-27 04:35:33.143381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.719 [2024-11-27 04:35:33.145720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.719 [2024-11-27 04:35:33.145761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:36.719 BaseBdev2 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 BaseBdev3_malloc 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 [2024-11-27 04:35:33.216482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:36.719 [2024-11-27 04:35:33.216556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.719 [2024-11-27 04:35:33.216583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:36.719 [2024-11-27 04:35:33.216597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.719 
[2024-11-27 04:35:33.219090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.719 [2024-11-27 04:35:33.219153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:36.719 BaseBdev3 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.719 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.720 BaseBdev4_malloc 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.720 [2024-11-27 04:35:33.273339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:36.720 [2024-11-27 04:35:33.273409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.720 [2024-11-27 04:35:33.273434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:36.720 [2024-11-27 04:35:33.273446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.720 [2024-11-27 04:35:33.275799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.720 [2024-11-27 04:35:33.275844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:18:36.720 BaseBdev4 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.720 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.001 spare_malloc 00:18:37.001 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.001 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:37.001 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.001 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.001 spare_delay 00:18:37.001 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.001 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.002 [2024-11-27 04:35:33.339539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:37.002 [2024-11-27 04:35:33.339602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.002 [2024-11-27 04:35:33.339626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:37.002 [2024-11-27 04:35:33.339639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.002 [2024-11-27 04:35:33.342108] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.002 [2024-11-27 04:35:33.342148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:37.002 spare 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.002 [2024-11-27 04:35:33.351569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:37.002 [2024-11-27 04:35:33.353678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.002 [2024-11-27 04:35:33.353758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:37.002 [2024-11-27 04:35:33.353823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:37.002 [2024-11-27 04:35:33.353935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:37.002 [2024-11-27 04:35:33.353959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:37.002 [2024-11-27 04:35:33.354291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:37.002 [2024-11-27 04:35:33.363816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:37.002 [2024-11-27 04:35:33.363844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:37.002 [2024-11-27 04:35:33.364136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.002 04:35:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.002 "name": "raid_bdev1", 00:18:37.002 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:37.002 "strip_size_kb": 64, 00:18:37.002 "state": "online", 00:18:37.002 
"raid_level": "raid5f", 00:18:37.002 "superblock": false, 00:18:37.002 "num_base_bdevs": 4, 00:18:37.002 "num_base_bdevs_discovered": 4, 00:18:37.002 "num_base_bdevs_operational": 4, 00:18:37.002 "base_bdevs_list": [ 00:18:37.002 { 00:18:37.002 "name": "BaseBdev1", 00:18:37.002 "uuid": "5436707f-caa1-5c12-b5ff-064a6b003242", 00:18:37.002 "is_configured": true, 00:18:37.002 "data_offset": 0, 00:18:37.002 "data_size": 65536 00:18:37.002 }, 00:18:37.002 { 00:18:37.002 "name": "BaseBdev2", 00:18:37.002 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:37.002 "is_configured": true, 00:18:37.002 "data_offset": 0, 00:18:37.002 "data_size": 65536 00:18:37.002 }, 00:18:37.002 { 00:18:37.002 "name": "BaseBdev3", 00:18:37.002 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:37.002 "is_configured": true, 00:18:37.002 "data_offset": 0, 00:18:37.002 "data_size": 65536 00:18:37.002 }, 00:18:37.002 { 00:18:37.002 "name": "BaseBdev4", 00:18:37.002 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:37.002 "is_configured": true, 00:18:37.002 "data_offset": 0, 00:18:37.002 "data_size": 65536 00:18:37.002 } 00:18:37.002 ] 00:18:37.002 }' 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.002 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.263 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.263 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.263 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.263 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:37.263 [2024-11-27 04:35:33.833288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.263 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:37.522 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:37.522 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:37.522 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:18:37.523 04:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:37.781 [2024-11-27 04:35:34.148564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:37.781 /dev/nbd0 00:18:37.781 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.782 1+0 records in 00:18:37.782 1+0 records out 00:18:37.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294948 s, 13.9 MB/s 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:37.782 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:38.349 512+0 records in 00:18:38.349 512+0 records out 00:18:38.349 100663296 bytes (101 MB, 96 MiB) copied, 0.698176 s, 144 MB/s 00:18:38.349 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:38.349 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:38.349 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:38.349 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:38.349 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:38.349 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.349 04:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:38.610 [2024-11-27 04:35:35.187566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:38.610 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:38.610 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:38.610 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:38.610 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.610 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.610 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.870 [2024-11-27 04:35:35.206721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.870 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.870 "name": "raid_bdev1", 00:18:38.870 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:38.870 "strip_size_kb": 64, 00:18:38.870 "state": "online", 00:18:38.870 "raid_level": "raid5f", 00:18:38.870 "superblock": false, 00:18:38.870 "num_base_bdevs": 4, 00:18:38.870 "num_base_bdevs_discovered": 3, 00:18:38.870 "num_base_bdevs_operational": 3, 00:18:38.870 "base_bdevs_list": [ 00:18:38.870 { 00:18:38.870 "name": null, 00:18:38.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.870 "is_configured": false, 00:18:38.870 "data_offset": 0, 00:18:38.870 "data_size": 65536 00:18:38.870 }, 00:18:38.870 { 00:18:38.870 "name": "BaseBdev2", 00:18:38.871 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:38.871 "is_configured": true, 00:18:38.871 "data_offset": 0, 00:18:38.871 "data_size": 65536 00:18:38.871 }, 00:18:38.871 { 00:18:38.871 "name": "BaseBdev3", 00:18:38.871 "uuid": 
"f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:38.871 "is_configured": true, 00:18:38.871 "data_offset": 0, 00:18:38.871 "data_size": 65536 00:18:38.871 }, 00:18:38.871 { 00:18:38.871 "name": "BaseBdev4", 00:18:38.871 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:38.871 "is_configured": true, 00:18:38.871 "data_offset": 0, 00:18:38.871 "data_size": 65536 00:18:38.871 } 00:18:38.871 ] 00:18:38.871 }' 00:18:38.871 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.871 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.130 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.130 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.130 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.130 [2024-11-27 04:35:35.685914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.130 [2024-11-27 04:35:35.705348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:39.130 04:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.130 04:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:39.389 [2024-11-27 04:35:35.717695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.329 04:35:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.329 "name": "raid_bdev1", 00:18:40.329 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:40.329 "strip_size_kb": 64, 00:18:40.329 "state": "online", 00:18:40.329 "raid_level": "raid5f", 00:18:40.329 "superblock": false, 00:18:40.329 "num_base_bdevs": 4, 00:18:40.329 "num_base_bdevs_discovered": 4, 00:18:40.329 "num_base_bdevs_operational": 4, 00:18:40.329 "process": { 00:18:40.329 "type": "rebuild", 00:18:40.329 "target": "spare", 00:18:40.329 "progress": { 00:18:40.329 "blocks": 17280, 00:18:40.329 "percent": 8 00:18:40.329 } 00:18:40.329 }, 00:18:40.329 "base_bdevs_list": [ 00:18:40.329 { 00:18:40.329 "name": "spare", 00:18:40.329 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:40.329 "is_configured": true, 00:18:40.329 "data_offset": 0, 00:18:40.329 "data_size": 65536 00:18:40.329 }, 00:18:40.329 { 00:18:40.329 "name": "BaseBdev2", 00:18:40.329 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:40.329 "is_configured": true, 00:18:40.329 "data_offset": 0, 00:18:40.329 "data_size": 65536 00:18:40.329 }, 00:18:40.329 { 00:18:40.329 "name": "BaseBdev3", 00:18:40.329 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:40.329 "is_configured": true, 00:18:40.329 "data_offset": 0, 00:18:40.329 "data_size": 65536 00:18:40.329 }, 
00:18:40.329 { 00:18:40.329 "name": "BaseBdev4", 00:18:40.329 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:40.329 "is_configured": true, 00:18:40.329 "data_offset": 0, 00:18:40.329 "data_size": 65536 00:18:40.329 } 00:18:40.329 ] 00:18:40.329 }' 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.329 04:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.329 [2024-11-27 04:35:36.873003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.588 [2024-11-27 04:35:36.927842] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:40.588 [2024-11-27 04:35:36.927925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.588 [2024-11-27 04:35:36.927948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.588 [2024-11-27 04:35:36.927960] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.588 04:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.588 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.588 "name": "raid_bdev1", 00:18:40.588 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:40.588 "strip_size_kb": 64, 00:18:40.588 "state": "online", 00:18:40.588 "raid_level": "raid5f", 00:18:40.588 "superblock": false, 00:18:40.588 "num_base_bdevs": 4, 00:18:40.588 "num_base_bdevs_discovered": 3, 00:18:40.588 "num_base_bdevs_operational": 3, 00:18:40.588 "base_bdevs_list": [ 00:18:40.588 { 00:18:40.588 "name": null, 00:18:40.588 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:40.588 "is_configured": false, 00:18:40.588 "data_offset": 0, 00:18:40.588 "data_size": 65536 00:18:40.588 }, 00:18:40.588 { 00:18:40.588 "name": "BaseBdev2", 00:18:40.588 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:40.588 "is_configured": true, 00:18:40.588 "data_offset": 0, 00:18:40.588 "data_size": 65536 00:18:40.588 }, 00:18:40.588 { 00:18:40.588 "name": "BaseBdev3", 00:18:40.588 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:40.588 "is_configured": true, 00:18:40.588 "data_offset": 0, 00:18:40.588 "data_size": 65536 00:18:40.588 }, 00:18:40.588 { 00:18:40.588 "name": "BaseBdev4", 00:18:40.588 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:40.588 "is_configured": true, 00:18:40.588 "data_offset": 0, 00:18:40.588 "data_size": 65536 00:18:40.588 } 00:18:40.588 ] 00:18:40.588 }' 00:18:40.588 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.588 04:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.849 04:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.109 "name": "raid_bdev1", 00:18:41.109 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:41.109 "strip_size_kb": 64, 00:18:41.109 "state": "online", 00:18:41.109 "raid_level": "raid5f", 00:18:41.109 "superblock": false, 00:18:41.109 "num_base_bdevs": 4, 00:18:41.109 "num_base_bdevs_discovered": 3, 00:18:41.109 "num_base_bdevs_operational": 3, 00:18:41.109 "base_bdevs_list": [ 00:18:41.109 { 00:18:41.109 "name": null, 00:18:41.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.109 "is_configured": false, 00:18:41.109 "data_offset": 0, 00:18:41.109 "data_size": 65536 00:18:41.109 }, 00:18:41.109 { 00:18:41.109 "name": "BaseBdev2", 00:18:41.109 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:41.109 "is_configured": true, 00:18:41.109 "data_offset": 0, 00:18:41.109 "data_size": 65536 00:18:41.109 }, 00:18:41.109 { 00:18:41.109 "name": "BaseBdev3", 00:18:41.109 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:41.109 "is_configured": true, 00:18:41.109 "data_offset": 0, 00:18:41.109 "data_size": 65536 00:18:41.109 }, 00:18:41.109 { 00:18:41.109 "name": "BaseBdev4", 00:18:41.109 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:41.109 "is_configured": true, 00:18:41.109 "data_offset": 0, 00:18:41.109 "data_size": 65536 00:18:41.109 } 00:18:41.109 ] 00:18:41.109 }' 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.109 [2024-11-27 04:35:37.532163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.109 [2024-11-27 04:35:37.550896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.109 04:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:41.109 [2024-11-27 04:35:37.562997] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.046 "name": "raid_bdev1", 00:18:42.046 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:42.046 "strip_size_kb": 64, 00:18:42.046 "state": "online", 00:18:42.046 "raid_level": "raid5f", 00:18:42.046 "superblock": false, 00:18:42.046 "num_base_bdevs": 4, 00:18:42.046 "num_base_bdevs_discovered": 4, 00:18:42.046 "num_base_bdevs_operational": 4, 00:18:42.046 "process": { 00:18:42.046 "type": "rebuild", 00:18:42.046 "target": "spare", 00:18:42.046 "progress": { 00:18:42.046 "blocks": 17280, 00:18:42.046 "percent": 8 00:18:42.046 } 00:18:42.046 }, 00:18:42.046 "base_bdevs_list": [ 00:18:42.046 { 00:18:42.046 "name": "spare", 00:18:42.046 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:42.046 "is_configured": true, 00:18:42.046 "data_offset": 0, 00:18:42.046 "data_size": 65536 00:18:42.046 }, 00:18:42.046 { 00:18:42.046 "name": "BaseBdev2", 00:18:42.046 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:42.046 "is_configured": true, 00:18:42.046 "data_offset": 0, 00:18:42.046 "data_size": 65536 00:18:42.046 }, 00:18:42.046 { 00:18:42.046 "name": "BaseBdev3", 00:18:42.046 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:42.046 "is_configured": true, 00:18:42.046 "data_offset": 0, 00:18:42.046 "data_size": 65536 00:18:42.046 }, 00:18:42.046 { 00:18:42.046 "name": "BaseBdev4", 00:18:42.046 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:42.046 "is_configured": true, 00:18:42.046 "data_offset": 0, 00:18:42.046 "data_size": 65536 00:18:42.046 } 00:18:42.046 ] 00:18:42.046 }' 00:18:42.046 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.305 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.305 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:42.305 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=646 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.306 "name": "raid_bdev1", 00:18:42.306 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:42.306 "strip_size_kb": 64, 
00:18:42.306 "state": "online", 00:18:42.306 "raid_level": "raid5f", 00:18:42.306 "superblock": false, 00:18:42.306 "num_base_bdevs": 4, 00:18:42.306 "num_base_bdevs_discovered": 4, 00:18:42.306 "num_base_bdevs_operational": 4, 00:18:42.306 "process": { 00:18:42.306 "type": "rebuild", 00:18:42.306 "target": "spare", 00:18:42.306 "progress": { 00:18:42.306 "blocks": 21120, 00:18:42.306 "percent": 10 00:18:42.306 } 00:18:42.306 }, 00:18:42.306 "base_bdevs_list": [ 00:18:42.306 { 00:18:42.306 "name": "spare", 00:18:42.306 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:42.306 "is_configured": true, 00:18:42.306 "data_offset": 0, 00:18:42.306 "data_size": 65536 00:18:42.306 }, 00:18:42.306 { 00:18:42.306 "name": "BaseBdev2", 00:18:42.306 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:42.306 "is_configured": true, 00:18:42.306 "data_offset": 0, 00:18:42.306 "data_size": 65536 00:18:42.306 }, 00:18:42.306 { 00:18:42.306 "name": "BaseBdev3", 00:18:42.306 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:42.306 "is_configured": true, 00:18:42.306 "data_offset": 0, 00:18:42.306 "data_size": 65536 00:18:42.306 }, 00:18:42.306 { 00:18:42.306 "name": "BaseBdev4", 00:18:42.306 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:42.306 "is_configured": true, 00:18:42.306 "data_offset": 0, 00:18:42.306 "data_size": 65536 00:18:42.306 } 00:18:42.306 ] 00:18:42.306 }' 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.306 04:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.685 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.685 "name": "raid_bdev1", 00:18:43.685 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:43.685 "strip_size_kb": 64, 00:18:43.685 "state": "online", 00:18:43.685 "raid_level": "raid5f", 00:18:43.685 "superblock": false, 00:18:43.685 "num_base_bdevs": 4, 00:18:43.686 "num_base_bdevs_discovered": 4, 00:18:43.686 "num_base_bdevs_operational": 4, 00:18:43.686 "process": { 00:18:43.686 "type": "rebuild", 00:18:43.686 "target": "spare", 00:18:43.686 "progress": { 00:18:43.686 "blocks": 42240, 00:18:43.686 "percent": 21 00:18:43.686 } 00:18:43.686 }, 00:18:43.686 "base_bdevs_list": [ 00:18:43.686 { 00:18:43.686 "name": "spare", 00:18:43.686 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:43.686 "is_configured": true, 
00:18:43.686 "data_offset": 0, 00:18:43.686 "data_size": 65536 00:18:43.686 }, 00:18:43.686 { 00:18:43.686 "name": "BaseBdev2", 00:18:43.686 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:43.686 "is_configured": true, 00:18:43.686 "data_offset": 0, 00:18:43.686 "data_size": 65536 00:18:43.686 }, 00:18:43.686 { 00:18:43.686 "name": "BaseBdev3", 00:18:43.686 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:43.686 "is_configured": true, 00:18:43.686 "data_offset": 0, 00:18:43.686 "data_size": 65536 00:18:43.686 }, 00:18:43.686 { 00:18:43.686 "name": "BaseBdev4", 00:18:43.686 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:43.686 "is_configured": true, 00:18:43.686 "data_offset": 0, 00:18:43.686 "data_size": 65536 00:18:43.686 } 00:18:43.686 ] 00:18:43.686 }' 00:18:43.686 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.686 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.686 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.686 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.686 04:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.622 04:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.622 04:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.622 04:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.622 "name": "raid_bdev1", 00:18:44.622 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:44.622 "strip_size_kb": 64, 00:18:44.622 "state": "online", 00:18:44.622 "raid_level": "raid5f", 00:18:44.622 "superblock": false, 00:18:44.622 "num_base_bdevs": 4, 00:18:44.622 "num_base_bdevs_discovered": 4, 00:18:44.622 "num_base_bdevs_operational": 4, 00:18:44.622 "process": { 00:18:44.622 "type": "rebuild", 00:18:44.622 "target": "spare", 00:18:44.622 "progress": { 00:18:44.622 "blocks": 65280, 00:18:44.622 "percent": 33 00:18:44.622 } 00:18:44.622 }, 00:18:44.622 "base_bdevs_list": [ 00:18:44.622 { 00:18:44.622 "name": "spare", 00:18:44.622 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:44.622 "is_configured": true, 00:18:44.622 "data_offset": 0, 00:18:44.622 "data_size": 65536 00:18:44.622 }, 00:18:44.622 { 00:18:44.622 "name": "BaseBdev2", 00:18:44.622 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:44.622 "is_configured": true, 00:18:44.622 "data_offset": 0, 00:18:44.622 "data_size": 65536 00:18:44.622 }, 00:18:44.622 { 00:18:44.622 "name": "BaseBdev3", 00:18:44.622 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:44.622 "is_configured": true, 00:18:44.622 "data_offset": 0, 00:18:44.622 "data_size": 65536 00:18:44.622 }, 00:18:44.622 { 00:18:44.622 "name": "BaseBdev4", 00:18:44.622 "uuid": 
"00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:44.622 "is_configured": true, 00:18:44.622 "data_offset": 0, 00:18:44.622 "data_size": 65536 00:18:44.622 } 00:18:44.622 ] 00:18:44.622 }' 00:18:44.622 04:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.622 04:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.622 04:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.622 04:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.622 04:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.569 04:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.828 04:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.828 04:35:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.828 "name": "raid_bdev1", 00:18:45.828 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:45.828 "strip_size_kb": 64, 00:18:45.828 "state": "online", 00:18:45.828 "raid_level": "raid5f", 00:18:45.828 "superblock": false, 00:18:45.828 "num_base_bdevs": 4, 00:18:45.828 "num_base_bdevs_discovered": 4, 00:18:45.828 "num_base_bdevs_operational": 4, 00:18:45.828 "process": { 00:18:45.828 "type": "rebuild", 00:18:45.828 "target": "spare", 00:18:45.828 "progress": { 00:18:45.828 "blocks": 86400, 00:18:45.828 "percent": 43 00:18:45.828 } 00:18:45.828 }, 00:18:45.828 "base_bdevs_list": [ 00:18:45.828 { 00:18:45.828 "name": "spare", 00:18:45.828 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:45.828 "is_configured": true, 00:18:45.828 "data_offset": 0, 00:18:45.828 "data_size": 65536 00:18:45.828 }, 00:18:45.828 { 00:18:45.828 "name": "BaseBdev2", 00:18:45.828 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:45.828 "is_configured": true, 00:18:45.828 "data_offset": 0, 00:18:45.828 "data_size": 65536 00:18:45.828 }, 00:18:45.828 { 00:18:45.828 "name": "BaseBdev3", 00:18:45.828 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:45.828 "is_configured": true, 00:18:45.828 "data_offset": 0, 00:18:45.828 "data_size": 65536 00:18:45.828 }, 00:18:45.828 { 00:18:45.828 "name": "BaseBdev4", 00:18:45.828 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:45.828 "is_configured": true, 00:18:45.828 "data_offset": 0, 00:18:45.828 "data_size": 65536 00:18:45.828 } 00:18:45.828 ] 00:18:45.828 }' 00:18:45.828 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.828 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.828 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.828 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:18:45.828 04:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.768 04:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.027 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.027 "name": "raid_bdev1", 00:18:47.027 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:47.027 "strip_size_kb": 64, 00:18:47.027 "state": "online", 00:18:47.027 "raid_level": "raid5f", 00:18:47.027 "superblock": false, 00:18:47.027 "num_base_bdevs": 4, 00:18:47.027 "num_base_bdevs_discovered": 4, 00:18:47.027 "num_base_bdevs_operational": 4, 00:18:47.027 "process": { 00:18:47.027 "type": "rebuild", 00:18:47.027 "target": "spare", 00:18:47.027 "progress": { 00:18:47.027 "blocks": 109440, 00:18:47.027 "percent": 55 00:18:47.027 } 00:18:47.027 }, 00:18:47.027 
"base_bdevs_list": [ 00:18:47.027 { 00:18:47.027 "name": "spare", 00:18:47.027 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:47.027 "is_configured": true, 00:18:47.027 "data_offset": 0, 00:18:47.027 "data_size": 65536 00:18:47.027 }, 00:18:47.027 { 00:18:47.027 "name": "BaseBdev2", 00:18:47.027 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:47.027 "is_configured": true, 00:18:47.027 "data_offset": 0, 00:18:47.027 "data_size": 65536 00:18:47.027 }, 00:18:47.027 { 00:18:47.027 "name": "BaseBdev3", 00:18:47.027 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:47.027 "is_configured": true, 00:18:47.027 "data_offset": 0, 00:18:47.027 "data_size": 65536 00:18:47.027 }, 00:18:47.027 { 00:18:47.027 "name": "BaseBdev4", 00:18:47.027 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:47.027 "is_configured": true, 00:18:47.027 "data_offset": 0, 00:18:47.027 "data_size": 65536 00:18:47.027 } 00:18:47.027 ] 00:18:47.027 }' 00:18:47.027 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.027 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.027 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.027 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.027 04:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.964 04:35:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.964 "name": "raid_bdev1", 00:18:47.964 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:47.964 "strip_size_kb": 64, 00:18:47.964 "state": "online", 00:18:47.964 "raid_level": "raid5f", 00:18:47.964 "superblock": false, 00:18:47.964 "num_base_bdevs": 4, 00:18:47.964 "num_base_bdevs_discovered": 4, 00:18:47.964 "num_base_bdevs_operational": 4, 00:18:47.964 "process": { 00:18:47.964 "type": "rebuild", 00:18:47.964 "target": "spare", 00:18:47.964 "progress": { 00:18:47.964 "blocks": 130560, 00:18:47.964 "percent": 66 00:18:47.964 } 00:18:47.964 }, 00:18:47.964 "base_bdevs_list": [ 00:18:47.964 { 00:18:47.964 "name": "spare", 00:18:47.964 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:47.964 "is_configured": true, 00:18:47.964 "data_offset": 0, 00:18:47.964 "data_size": 65536 00:18:47.964 }, 00:18:47.964 { 00:18:47.964 "name": "BaseBdev2", 00:18:47.964 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:47.964 "is_configured": true, 00:18:47.964 "data_offset": 0, 00:18:47.964 "data_size": 65536 00:18:47.964 }, 00:18:47.964 { 00:18:47.964 "name": "BaseBdev3", 00:18:47.964 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:47.964 
"is_configured": true, 00:18:47.964 "data_offset": 0, 00:18:47.964 "data_size": 65536 00:18:47.964 }, 00:18:47.964 { 00:18:47.964 "name": "BaseBdev4", 00:18:47.964 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:47.964 "is_configured": true, 00:18:47.964 "data_offset": 0, 00:18:47.964 "data_size": 65536 00:18:47.964 } 00:18:47.964 ] 00:18:47.964 }' 00:18:47.964 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.223 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.223 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.223 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.223 04:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.158 "name": "raid_bdev1", 00:18:49.158 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:49.158 "strip_size_kb": 64, 00:18:49.158 "state": "online", 00:18:49.158 "raid_level": "raid5f", 00:18:49.158 "superblock": false, 00:18:49.158 "num_base_bdevs": 4, 00:18:49.158 "num_base_bdevs_discovered": 4, 00:18:49.158 "num_base_bdevs_operational": 4, 00:18:49.158 "process": { 00:18:49.158 "type": "rebuild", 00:18:49.158 "target": "spare", 00:18:49.158 "progress": { 00:18:49.158 "blocks": 153600, 00:18:49.158 "percent": 78 00:18:49.158 } 00:18:49.158 }, 00:18:49.158 "base_bdevs_list": [ 00:18:49.158 { 00:18:49.158 "name": "spare", 00:18:49.158 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:49.158 "is_configured": true, 00:18:49.158 "data_offset": 0, 00:18:49.158 "data_size": 65536 00:18:49.158 }, 00:18:49.158 { 00:18:49.158 "name": "BaseBdev2", 00:18:49.158 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:49.158 "is_configured": true, 00:18:49.158 "data_offset": 0, 00:18:49.158 "data_size": 65536 00:18:49.158 }, 00:18:49.158 { 00:18:49.158 "name": "BaseBdev3", 00:18:49.158 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:49.158 "is_configured": true, 00:18:49.158 "data_offset": 0, 00:18:49.158 "data_size": 65536 00:18:49.158 }, 00:18:49.158 { 00:18:49.158 "name": "BaseBdev4", 00:18:49.158 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:49.158 "is_configured": true, 00:18:49.158 "data_offset": 0, 00:18:49.158 "data_size": 65536 00:18:49.158 } 00:18:49.158 ] 00:18:49.158 }' 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.158 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.158 04:35:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.416 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.416 04:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.352 "name": "raid_bdev1", 00:18:50.352 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:50.352 "strip_size_kb": 64, 00:18:50.352 "state": "online", 00:18:50.352 "raid_level": "raid5f", 00:18:50.352 "superblock": false, 00:18:50.352 "num_base_bdevs": 4, 00:18:50.352 "num_base_bdevs_discovered": 4, 00:18:50.352 "num_base_bdevs_operational": 4, 00:18:50.352 "process": { 00:18:50.352 
"type": "rebuild", 00:18:50.352 "target": "spare", 00:18:50.352 "progress": { 00:18:50.352 "blocks": 174720, 00:18:50.352 "percent": 88 00:18:50.352 } 00:18:50.352 }, 00:18:50.352 "base_bdevs_list": [ 00:18:50.352 { 00:18:50.352 "name": "spare", 00:18:50.352 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:50.352 "is_configured": true, 00:18:50.352 "data_offset": 0, 00:18:50.352 "data_size": 65536 00:18:50.352 }, 00:18:50.352 { 00:18:50.352 "name": "BaseBdev2", 00:18:50.352 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:50.352 "is_configured": true, 00:18:50.352 "data_offset": 0, 00:18:50.352 "data_size": 65536 00:18:50.352 }, 00:18:50.352 { 00:18:50.352 "name": "BaseBdev3", 00:18:50.352 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:50.352 "is_configured": true, 00:18:50.352 "data_offset": 0, 00:18:50.352 "data_size": 65536 00:18:50.352 }, 00:18:50.352 { 00:18:50.352 "name": "BaseBdev4", 00:18:50.352 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:50.352 "is_configured": true, 00:18:50.352 "data_offset": 0, 00:18:50.352 "data_size": 65536 00:18:50.352 } 00:18:50.352 ] 00:18:50.352 }' 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.352 04:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.730 [2024-11-27 04:35:47.943195] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:51.730 [2024-11-27 04:35:47.943296] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:51.730 [2024-11-27 04:35:47.943352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.730 "name": "raid_bdev1", 00:18:51.730 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:51.730 "strip_size_kb": 64, 00:18:51.730 "state": "online", 00:18:51.730 "raid_level": "raid5f", 00:18:51.730 "superblock": false, 00:18:51.730 "num_base_bdevs": 4, 00:18:51.730 "num_base_bdevs_discovered": 4, 00:18:51.730 "num_base_bdevs_operational": 4, 00:18:51.730 "process": { 00:18:51.730 "type": "rebuild", 00:18:51.730 "target": "spare", 00:18:51.730 "progress": { 00:18:51.730 "blocks": 195840, 00:18:51.730 "percent": 99 00:18:51.730 } 00:18:51.730 }, 00:18:51.730 "base_bdevs_list": [ 00:18:51.730 { 00:18:51.730 "name": 
"spare", 00:18:51.730 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:51.730 "is_configured": true, 00:18:51.730 "data_offset": 0, 00:18:51.730 "data_size": 65536 00:18:51.730 }, 00:18:51.730 { 00:18:51.730 "name": "BaseBdev2", 00:18:51.730 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:51.730 "is_configured": true, 00:18:51.730 "data_offset": 0, 00:18:51.730 "data_size": 65536 00:18:51.730 }, 00:18:51.730 { 00:18:51.730 "name": "BaseBdev3", 00:18:51.730 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:51.730 "is_configured": true, 00:18:51.730 "data_offset": 0, 00:18:51.730 "data_size": 65536 00:18:51.730 }, 00:18:51.730 { 00:18:51.730 "name": "BaseBdev4", 00:18:51.730 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:51.730 "is_configured": true, 00:18:51.730 "data_offset": 0, 00:18:51.730 "data_size": 65536 00:18:51.730 } 00:18:51.730 ] 00:18:51.730 }' 00:18:51.730 04:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.730 04:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.730 04:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.730 04:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.730 04:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.666 "name": "raid_bdev1", 00:18:52.666 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:52.666 "strip_size_kb": 64, 00:18:52.666 "state": "online", 00:18:52.666 "raid_level": "raid5f", 00:18:52.666 "superblock": false, 00:18:52.666 "num_base_bdevs": 4, 00:18:52.666 "num_base_bdevs_discovered": 4, 00:18:52.666 "num_base_bdevs_operational": 4, 00:18:52.666 "base_bdevs_list": [ 00:18:52.666 { 00:18:52.666 "name": "spare", 00:18:52.666 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:52.666 "is_configured": true, 00:18:52.666 "data_offset": 0, 00:18:52.666 "data_size": 65536 00:18:52.666 }, 00:18:52.666 { 00:18:52.666 "name": "BaseBdev2", 00:18:52.666 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:52.666 "is_configured": true, 00:18:52.666 "data_offset": 0, 00:18:52.666 "data_size": 65536 00:18:52.666 }, 00:18:52.666 { 00:18:52.666 "name": "BaseBdev3", 00:18:52.666 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:52.666 "is_configured": true, 00:18:52.666 "data_offset": 0, 00:18:52.666 "data_size": 65536 00:18:52.666 }, 00:18:52.666 { 00:18:52.666 "name": "BaseBdev4", 00:18:52.666 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:52.666 "is_configured": true, 00:18:52.666 "data_offset": 0, 00:18:52.666 
"data_size": 65536 00:18:52.666 } 00:18:52.666 ] 00:18:52.666 }' 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.666 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.925 "name": "raid_bdev1", 00:18:52.925 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:52.925 "strip_size_kb": 64, 00:18:52.925 "state": "online", 00:18:52.925 "raid_level": "raid5f", 
00:18:52.925 "superblock": false, 00:18:52.925 "num_base_bdevs": 4, 00:18:52.925 "num_base_bdevs_discovered": 4, 00:18:52.925 "num_base_bdevs_operational": 4, 00:18:52.925 "base_bdevs_list": [ 00:18:52.925 { 00:18:52.925 "name": "spare", 00:18:52.925 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:52.925 "is_configured": true, 00:18:52.925 "data_offset": 0, 00:18:52.925 "data_size": 65536 00:18:52.925 }, 00:18:52.925 { 00:18:52.925 "name": "BaseBdev2", 00:18:52.925 "uuid": "02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:52.925 "is_configured": true, 00:18:52.925 "data_offset": 0, 00:18:52.925 "data_size": 65536 00:18:52.925 }, 00:18:52.925 { 00:18:52.925 "name": "BaseBdev3", 00:18:52.925 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:52.925 "is_configured": true, 00:18:52.925 "data_offset": 0, 00:18:52.925 "data_size": 65536 00:18:52.925 }, 00:18:52.925 { 00:18:52.925 "name": "BaseBdev4", 00:18:52.925 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:52.925 "is_configured": true, 00:18:52.925 "data_offset": 0, 00:18:52.925 "data_size": 65536 00:18:52.925 } 00:18:52.925 ] 00:18:52.925 }' 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.925 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.926 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.926 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.926 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.926 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.926 "name": "raid_bdev1", 00:18:52.926 "uuid": "afbf01d6-bed0-4bf6-b22a-54aa6e8a2017", 00:18:52.926 "strip_size_kb": 64, 00:18:52.926 "state": "online", 00:18:52.926 "raid_level": "raid5f", 00:18:52.926 "superblock": false, 00:18:52.926 "num_base_bdevs": 4, 00:18:52.926 "num_base_bdevs_discovered": 4, 00:18:52.926 "num_base_bdevs_operational": 4, 00:18:52.926 "base_bdevs_list": [ 00:18:52.926 { 00:18:52.926 "name": "spare", 00:18:52.926 "uuid": "282b2fc7-f031-5f49-8211-d4bd8cd052aa", 00:18:52.926 "is_configured": true, 00:18:52.926 "data_offset": 0, 00:18:52.926 "data_size": 65536 00:18:52.926 }, 00:18:52.926 { 00:18:52.926 "name": "BaseBdev2", 00:18:52.926 "uuid": 
"02a660da-153e-58f6-af2a-8ab732934fc8", 00:18:52.926 "is_configured": true, 00:18:52.926 "data_offset": 0, 00:18:52.926 "data_size": 65536 00:18:52.926 }, 00:18:52.926 { 00:18:52.926 "name": "BaseBdev3", 00:18:52.926 "uuid": "f70a35d6-c4b5-5914-aa6c-a2fe6443674b", 00:18:52.926 "is_configured": true, 00:18:52.926 "data_offset": 0, 00:18:52.926 "data_size": 65536 00:18:52.926 }, 00:18:52.926 { 00:18:52.926 "name": "BaseBdev4", 00:18:52.926 "uuid": "00cc9646-0ed4-52e9-a63b-ee0deee624a5", 00:18:52.926 "is_configured": true, 00:18:52.926 "data_offset": 0, 00:18:52.926 "data_size": 65536 00:18:52.926 } 00:18:52.926 ] 00:18:52.926 }' 00:18:52.926 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.926 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.493 [2024-11-27 04:35:49.797224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.493 [2024-11-27 04:35:49.797269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.493 [2024-11-27 04:35:49.797378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.493 [2024-11-27 04:35:49.797527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.493 [2024-11-27 04:35:49.797549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:53.493 04:35:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:53.752 /dev/nbd0 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:53.752 1+0 records in 00:18:53.752 1+0 records out 00:18:53.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352792 s, 11.6 MB/s 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:53.752 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:54.010 /dev/nbd1 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:54.010 1+0 records in 00:18:54.010 1+0 records out 00:18:54.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275203 s, 14.9 MB/s 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:54.010 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.324 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:18:54.599 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:54.599 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.599 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.599 04:35:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84965 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84965 ']' 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84965 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.599 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84965 00:18:54.885 killing process with pid 84965 00:18:54.885 Received shutdown signal, test time was about 60.000000 seconds 00:18:54.885 00:18:54.885 Latency(us) 00:18:54.885 [2024-11-27T04:35:51.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.885 [2024-11-27T04:35:51.472Z] =================================================================================================================== 00:18:54.885 [2024-11-27T04:35:51.472Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.885 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.885 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.885 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84965' 00:18:54.885 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84965 00:18:54.885 [2024-11-27 04:35:51.192806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.885 04:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84965 00:18:55.457 [2024-11-27 04:35:51.776113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.834 ************************************ 00:18:56.834 END TEST raid5f_rebuild_test 00:18:56.834 ************************************ 00:18:56.834 04:35:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:56.834 00:18:56.834 real 0m20.999s 00:18:56.834 user 0m25.082s 00:18:56.834 sys 0m2.538s 00:18:56.834 04:35:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.834 04:35:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.834 04:35:53 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:56.834 04:35:53 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:56.834 04:35:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.834 04:35:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.834 ************************************ 00:18:56.834 START TEST raid5f_rebuild_test_sb 00:18:56.834 ************************************ 00:18:56.834 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:56.834 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:56.835 04:35:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85497 
00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85497 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85497 ']' 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.835 04:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.835 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:56.835 Zero copy mechanism will not be used. 00:18:56.835 [2024-11-27 04:35:53.188747] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:56.835 [2024-11-27 04:35:53.188891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85497 ] 00:18:56.835 [2024-11-27 04:35:53.342074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.094 [2024-11-27 04:35:53.461590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.094 [2024-11-27 04:35:53.661461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.094 [2024-11-27 04:35:53.661526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.662 BaseBdev1_malloc 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.662 [2024-11-27 04:35:54.082273] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:57.662 [2024-11-27 04:35:54.082340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.662 [2024-11-27 04:35:54.082363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:57.662 [2024-11-27 04:35:54.082374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.662 [2024-11-27 04:35:54.084733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.662 [2024-11-27 04:35:54.084778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:57.662 BaseBdev1 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.662 BaseBdev2_malloc 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.662 [2024-11-27 04:35:54.139891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:57.662 [2024-11-27 04:35:54.139965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:57.662 [2024-11-27 04:35:54.140007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:57.662 [2024-11-27 04:35:54.140019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.662 [2024-11-27 04:35:54.142420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.662 [2024-11-27 04:35:54.142462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:57.662 BaseBdev2 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.662 BaseBdev3_malloc 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.662 [2024-11-27 04:35:54.211436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:57.662 [2024-11-27 04:35:54.211528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.662 [2024-11-27 04:35:54.211555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:57.662 [2024-11-27 
04:35:54.211568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.662 [2024-11-27 04:35:54.213841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.662 [2024-11-27 04:35:54.213885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:57.662 BaseBdev3 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:57.662 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:57.663 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.663 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.923 BaseBdev4_malloc 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.923 [2024-11-27 04:35:54.271637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:57.923 [2024-11-27 04:35:54.271701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.923 [2024-11-27 04:35:54.271726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:57.923 [2024-11-27 04:35:54.271739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.923 [2024-11-27 04:35:54.274181] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:18:57.923 [2024-11-27 04:35:54.274228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:57.923 BaseBdev4 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.923 spare_malloc 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.923 spare_delay 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.923 [2024-11-27 04:35:54.342845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:57.923 [2024-11-27 04:35:54.342929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.923 [2024-11-27 04:35:54.342951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:18:57.923 [2024-11-27 04:35:54.342966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.923 [2024-11-27 04:35:54.346125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.923 [2024-11-27 04:35:54.346168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:57.923 spare 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.923 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.923 [2024-11-27 04:35:54.351029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.923 [2024-11-27 04:35:54.353657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:57.923 [2024-11-27 04:35:54.353754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:57.923 [2024-11-27 04:35:54.353818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:57.923 [2024-11-27 04:35:54.354047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:57.923 [2024-11-27 04:35:54.354073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:57.923 [2024-11-27 04:35:54.354447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:57.923 [2024-11-27 04:35:54.364580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:57.924 [2024-11-27 04:35:54.364622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:18:57.924 [2024-11-27 04:35:54.364936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.924 04:35:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.924 "name": "raid_bdev1", 00:18:57.924 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:18:57.924 "strip_size_kb": 64, 00:18:57.924 "state": "online", 00:18:57.924 "raid_level": "raid5f", 00:18:57.924 "superblock": true, 00:18:57.924 "num_base_bdevs": 4, 00:18:57.924 "num_base_bdevs_discovered": 4, 00:18:57.924 "num_base_bdevs_operational": 4, 00:18:57.924 "base_bdevs_list": [ 00:18:57.924 { 00:18:57.924 "name": "BaseBdev1", 00:18:57.924 "uuid": "dd06fc67-f8e6-5eed-98e1-a226af735c37", 00:18:57.924 "is_configured": true, 00:18:57.924 "data_offset": 2048, 00:18:57.924 "data_size": 63488 00:18:57.924 }, 00:18:57.924 { 00:18:57.924 "name": "BaseBdev2", 00:18:57.924 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:18:57.924 "is_configured": true, 00:18:57.924 "data_offset": 2048, 00:18:57.924 "data_size": 63488 00:18:57.924 }, 00:18:57.924 { 00:18:57.924 "name": "BaseBdev3", 00:18:57.924 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:18:57.924 "is_configured": true, 00:18:57.924 "data_offset": 2048, 00:18:57.924 "data_size": 63488 00:18:57.924 }, 00:18:57.924 { 00:18:57.924 "name": "BaseBdev4", 00:18:57.924 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:18:57.924 "is_configured": true, 00:18:57.924 "data_offset": 2048, 00:18:57.924 "data_size": 63488 00:18:57.924 } 00:18:57.924 ] 00:18:57.924 }' 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.924 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.493 04:35:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.493 [2024-11-27 04:35:54.834876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:58.493 04:35:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:58.493 04:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:58.753 [2024-11-27 04:35:55.134204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:58.753 /dev/nbd0 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:58.753 1+0 records in 00:18:58.753 
1+0 records out 00:18:58.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584427 s, 7.0 MB/s 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:58.753 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:59.320 496+0 records in 00:18:59.320 496+0 records out 00:18:59.320 97517568 bytes (98 MB, 93 MiB) copied, 0.575439 s, 169 MB/s 00:18:59.320 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:59.320 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.320 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:59.320 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:59.320 04:35:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:59.320 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:59.320 04:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:59.580 [2024-11-27 04:35:56.015078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.580 [2024-11-27 04:35:56.066689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:59.580 04:35:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.580 "name": "raid_bdev1", 00:18:59.580 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:18:59.580 "strip_size_kb": 64, 00:18:59.580 "state": "online", 00:18:59.580 "raid_level": "raid5f", 00:18:59.580 "superblock": true, 00:18:59.580 "num_base_bdevs": 4, 00:18:59.580 "num_base_bdevs_discovered": 3, 00:18:59.580 "num_base_bdevs_operational": 3, 00:18:59.580 
"base_bdevs_list": [ 00:18:59.580 { 00:18:59.580 "name": null, 00:18:59.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.580 "is_configured": false, 00:18:59.580 "data_offset": 0, 00:18:59.580 "data_size": 63488 00:18:59.580 }, 00:18:59.580 { 00:18:59.580 "name": "BaseBdev2", 00:18:59.580 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:18:59.580 "is_configured": true, 00:18:59.580 "data_offset": 2048, 00:18:59.580 "data_size": 63488 00:18:59.580 }, 00:18:59.580 { 00:18:59.580 "name": "BaseBdev3", 00:18:59.580 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:18:59.580 "is_configured": true, 00:18:59.580 "data_offset": 2048, 00:18:59.580 "data_size": 63488 00:18:59.580 }, 00:18:59.580 { 00:18:59.580 "name": "BaseBdev4", 00:18:59.580 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:18:59.580 "is_configured": true, 00:18:59.580 "data_offset": 2048, 00:18:59.580 "data_size": 63488 00:18:59.580 } 00:18:59.580 ] 00:18:59.580 }' 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.580 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.148 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:00.148 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.148 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.148 [2024-11-27 04:35:56.525923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:00.148 [2024-11-27 04:35:56.543617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:00.148 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.148 04:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:00.148 [2024-11-27 04:35:56.554598] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.086 "name": "raid_bdev1", 00:19:01.086 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:01.086 "strip_size_kb": 64, 00:19:01.086 "state": "online", 00:19:01.086 "raid_level": "raid5f", 00:19:01.086 "superblock": true, 00:19:01.086 "num_base_bdevs": 4, 00:19:01.086 "num_base_bdevs_discovered": 4, 00:19:01.086 "num_base_bdevs_operational": 4, 00:19:01.086 "process": { 00:19:01.086 "type": "rebuild", 00:19:01.086 "target": "spare", 00:19:01.086 "progress": { 00:19:01.086 "blocks": 17280, 00:19:01.086 "percent": 9 00:19:01.086 } 00:19:01.086 }, 00:19:01.086 "base_bdevs_list": [ 00:19:01.086 { 00:19:01.086 "name": "spare", 00:19:01.086 "uuid": 
"8529979e-8259-53fa-822e-81f31ebd964f", 00:19:01.086 "is_configured": true, 00:19:01.086 "data_offset": 2048, 00:19:01.086 "data_size": 63488 00:19:01.086 }, 00:19:01.086 { 00:19:01.086 "name": "BaseBdev2", 00:19:01.086 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:01.086 "is_configured": true, 00:19:01.086 "data_offset": 2048, 00:19:01.086 "data_size": 63488 00:19:01.086 }, 00:19:01.086 { 00:19:01.086 "name": "BaseBdev3", 00:19:01.086 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:01.086 "is_configured": true, 00:19:01.086 "data_offset": 2048, 00:19:01.086 "data_size": 63488 00:19:01.086 }, 00:19:01.086 { 00:19:01.086 "name": "BaseBdev4", 00:19:01.086 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:01.086 "is_configured": true, 00:19:01.086 "data_offset": 2048, 00:19:01.086 "data_size": 63488 00:19:01.086 } 00:19:01.086 ] 00:19:01.086 }' 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.086 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.345 [2024-11-27 04:35:57.690299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.345 [2024-11-27 04:35:57.764606] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:01.345 [2024-11-27 04:35:57.764713] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.345 [2024-11-27 04:35:57.764737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.345 [2024-11-27 04:35:57.764749] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.345 "name": "raid_bdev1", 00:19:01.345 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:01.345 "strip_size_kb": 64, 00:19:01.345 "state": "online", 00:19:01.345 "raid_level": "raid5f", 00:19:01.345 "superblock": true, 00:19:01.345 "num_base_bdevs": 4, 00:19:01.345 "num_base_bdevs_discovered": 3, 00:19:01.345 "num_base_bdevs_operational": 3, 00:19:01.345 "base_bdevs_list": [ 00:19:01.345 { 00:19:01.345 "name": null, 00:19:01.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.345 "is_configured": false, 00:19:01.345 "data_offset": 0, 00:19:01.345 "data_size": 63488 00:19:01.345 }, 00:19:01.345 { 00:19:01.345 "name": "BaseBdev2", 00:19:01.345 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:01.345 "is_configured": true, 00:19:01.345 "data_offset": 2048, 00:19:01.345 "data_size": 63488 00:19:01.345 }, 00:19:01.345 { 00:19:01.345 "name": "BaseBdev3", 00:19:01.345 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:01.345 "is_configured": true, 00:19:01.345 "data_offset": 2048, 00:19:01.345 "data_size": 63488 00:19:01.345 }, 00:19:01.345 { 00:19:01.345 "name": "BaseBdev4", 00:19:01.345 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:01.345 "is_configured": true, 00:19:01.345 "data_offset": 2048, 00:19:01.345 "data_size": 63488 00:19:01.345 } 00:19:01.345 ] 00:19:01.345 }' 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.345 04:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.914 
04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.914 "name": "raid_bdev1", 00:19:01.914 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:01.914 "strip_size_kb": 64, 00:19:01.914 "state": "online", 00:19:01.914 "raid_level": "raid5f", 00:19:01.914 "superblock": true, 00:19:01.914 "num_base_bdevs": 4, 00:19:01.914 "num_base_bdevs_discovered": 3, 00:19:01.914 "num_base_bdevs_operational": 3, 00:19:01.914 "base_bdevs_list": [ 00:19:01.914 { 00:19:01.914 "name": null, 00:19:01.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.914 "is_configured": false, 00:19:01.914 "data_offset": 0, 00:19:01.914 "data_size": 63488 00:19:01.914 }, 00:19:01.914 { 00:19:01.914 "name": "BaseBdev2", 00:19:01.914 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:01.914 "is_configured": true, 00:19:01.914 "data_offset": 2048, 00:19:01.914 "data_size": 63488 00:19:01.914 }, 00:19:01.914 { 00:19:01.914 "name": "BaseBdev3", 00:19:01.914 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:01.914 "is_configured": true, 00:19:01.914 "data_offset": 2048, 00:19:01.914 
"data_size": 63488 00:19:01.914 }, 00:19:01.914 { 00:19:01.914 "name": "BaseBdev4", 00:19:01.914 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:01.914 "is_configured": true, 00:19:01.914 "data_offset": 2048, 00:19:01.914 "data_size": 63488 00:19:01.914 } 00:19:01.914 ] 00:19:01.914 }' 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.914 [2024-11-27 04:35:58.429997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.914 [2024-11-27 04:35:58.449205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.914 04:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:01.914 [2024-11-27 04:35:58.461710] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:03.290 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.290 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.291 "name": "raid_bdev1", 00:19:03.291 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:03.291 "strip_size_kb": 64, 00:19:03.291 "state": "online", 00:19:03.291 "raid_level": "raid5f", 00:19:03.291 "superblock": true, 00:19:03.291 "num_base_bdevs": 4, 00:19:03.291 "num_base_bdevs_discovered": 4, 00:19:03.291 "num_base_bdevs_operational": 4, 00:19:03.291 "process": { 00:19:03.291 "type": "rebuild", 00:19:03.291 "target": "spare", 00:19:03.291 "progress": { 00:19:03.291 "blocks": 17280, 00:19:03.291 "percent": 9 00:19:03.291 } 00:19:03.291 }, 00:19:03.291 "base_bdevs_list": [ 00:19:03.291 { 00:19:03.291 "name": "spare", 00:19:03.291 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:03.291 "is_configured": true, 00:19:03.291 "data_offset": 2048, 00:19:03.291 "data_size": 63488 00:19:03.291 }, 00:19:03.291 { 00:19:03.291 "name": "BaseBdev2", 00:19:03.291 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:03.291 "is_configured": true, 00:19:03.291 "data_offset": 2048, 00:19:03.291 "data_size": 63488 00:19:03.291 }, 00:19:03.291 { 
00:19:03.291 "name": "BaseBdev3", 00:19:03.291 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:03.291 "is_configured": true, 00:19:03.291 "data_offset": 2048, 00:19:03.291 "data_size": 63488 00:19:03.291 }, 00:19:03.291 { 00:19:03.291 "name": "BaseBdev4", 00:19:03.291 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:03.291 "is_configured": true, 00:19:03.291 "data_offset": 2048, 00:19:03.291 "data_size": 63488 00:19:03.291 } 00:19:03.291 ] 00:19:03.291 }' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:03.291 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=667 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.291 "name": "raid_bdev1", 00:19:03.291 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:03.291 "strip_size_kb": 64, 00:19:03.291 "state": "online", 00:19:03.291 "raid_level": "raid5f", 00:19:03.291 "superblock": true, 00:19:03.291 "num_base_bdevs": 4, 00:19:03.291 "num_base_bdevs_discovered": 4, 00:19:03.291 "num_base_bdevs_operational": 4, 00:19:03.291 "process": { 00:19:03.291 "type": "rebuild", 00:19:03.291 "target": "spare", 00:19:03.291 "progress": { 00:19:03.291 "blocks": 21120, 00:19:03.291 "percent": 11 00:19:03.291 } 00:19:03.291 }, 00:19:03.291 "base_bdevs_list": [ 00:19:03.291 { 00:19:03.291 "name": "spare", 00:19:03.291 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:03.291 "is_configured": true, 00:19:03.291 "data_offset": 2048, 00:19:03.291 "data_size": 63488 00:19:03.291 }, 00:19:03.291 { 00:19:03.291 "name": "BaseBdev2", 00:19:03.291 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:03.291 "is_configured": true, 00:19:03.291 "data_offset": 2048, 00:19:03.291 "data_size": 63488 00:19:03.291 }, 00:19:03.291 { 
00:19:03.291 "name": "BaseBdev3", 00:19:03.291 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:03.291 "is_configured": true, 00:19:03.291 "data_offset": 2048, 00:19:03.291 "data_size": 63488 00:19:03.291 }, 00:19:03.291 { 00:19:03.291 "name": "BaseBdev4", 00:19:03.291 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:03.291 "is_configured": true, 00:19:03.291 "data_offset": 2048, 00:19:03.291 "data_size": 63488 00:19:03.291 } 00:19:03.291 ] 00:19:03.291 }' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.291 04:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.231 04:36:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.231 "name": "raid_bdev1", 00:19:04.231 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:04.231 "strip_size_kb": 64, 00:19:04.231 "state": "online", 00:19:04.231 "raid_level": "raid5f", 00:19:04.231 "superblock": true, 00:19:04.231 "num_base_bdevs": 4, 00:19:04.231 "num_base_bdevs_discovered": 4, 00:19:04.231 "num_base_bdevs_operational": 4, 00:19:04.231 "process": { 00:19:04.231 "type": "rebuild", 00:19:04.231 "target": "spare", 00:19:04.231 "progress": { 00:19:04.231 "blocks": 42240, 00:19:04.231 "percent": 22 00:19:04.231 } 00:19:04.231 }, 00:19:04.231 "base_bdevs_list": [ 00:19:04.231 { 00:19:04.231 "name": "spare", 00:19:04.231 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:04.231 "is_configured": true, 00:19:04.231 "data_offset": 2048, 00:19:04.231 "data_size": 63488 00:19:04.231 }, 00:19:04.231 { 00:19:04.231 "name": "BaseBdev2", 00:19:04.231 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:04.231 "is_configured": true, 00:19:04.231 "data_offset": 2048, 00:19:04.231 "data_size": 63488 00:19:04.231 }, 00:19:04.231 { 00:19:04.231 "name": "BaseBdev3", 00:19:04.231 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:04.231 "is_configured": true, 00:19:04.231 "data_offset": 2048, 00:19:04.231 "data_size": 63488 00:19:04.231 }, 00:19:04.231 { 00:19:04.231 "name": "BaseBdev4", 00:19:04.231 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:04.231 "is_configured": true, 00:19:04.231 "data_offset": 2048, 00:19:04.231 "data_size": 63488 00:19:04.231 } 00:19:04.231 ] 00:19:04.231 }' 00:19:04.231 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.491 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.491 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.491 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.491 04:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.490 "name": "raid_bdev1", 00:19:05.490 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:05.490 "strip_size_kb": 64, 00:19:05.490 "state": 
"online", 00:19:05.490 "raid_level": "raid5f", 00:19:05.490 "superblock": true, 00:19:05.490 "num_base_bdevs": 4, 00:19:05.490 "num_base_bdevs_discovered": 4, 00:19:05.490 "num_base_bdevs_operational": 4, 00:19:05.490 "process": { 00:19:05.490 "type": "rebuild", 00:19:05.490 "target": "spare", 00:19:05.490 "progress": { 00:19:05.490 "blocks": 63360, 00:19:05.490 "percent": 33 00:19:05.490 } 00:19:05.490 }, 00:19:05.490 "base_bdevs_list": [ 00:19:05.490 { 00:19:05.490 "name": "spare", 00:19:05.490 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:05.490 "is_configured": true, 00:19:05.490 "data_offset": 2048, 00:19:05.490 "data_size": 63488 00:19:05.490 }, 00:19:05.490 { 00:19:05.490 "name": "BaseBdev2", 00:19:05.490 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:05.490 "is_configured": true, 00:19:05.490 "data_offset": 2048, 00:19:05.490 "data_size": 63488 00:19:05.490 }, 00:19:05.490 { 00:19:05.490 "name": "BaseBdev3", 00:19:05.490 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:05.490 "is_configured": true, 00:19:05.490 "data_offset": 2048, 00:19:05.490 "data_size": 63488 00:19:05.490 }, 00:19:05.490 { 00:19:05.490 "name": "BaseBdev4", 00:19:05.490 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:05.490 "is_configured": true, 00:19:05.490 "data_offset": 2048, 00:19:05.490 "data_size": 63488 00:19:05.490 } 00:19:05.490 ] 00:19:05.490 }' 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.490 04:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.490 04:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.490 04:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.870 "name": "raid_bdev1", 00:19:06.870 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:06.870 "strip_size_kb": 64, 00:19:06.870 "state": "online", 00:19:06.870 "raid_level": "raid5f", 00:19:06.870 "superblock": true, 00:19:06.870 "num_base_bdevs": 4, 00:19:06.870 "num_base_bdevs_discovered": 4, 00:19:06.870 "num_base_bdevs_operational": 4, 00:19:06.870 "process": { 00:19:06.870 "type": "rebuild", 00:19:06.870 "target": "spare", 00:19:06.870 "progress": { 00:19:06.870 "blocks": 86400, 00:19:06.870 "percent": 45 00:19:06.870 } 00:19:06.870 }, 00:19:06.870 "base_bdevs_list": [ 00:19:06.870 { 00:19:06.870 "name": "spare", 00:19:06.870 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 
00:19:06.870 "is_configured": true, 00:19:06.870 "data_offset": 2048, 00:19:06.870 "data_size": 63488 00:19:06.870 }, 00:19:06.870 { 00:19:06.870 "name": "BaseBdev2", 00:19:06.870 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:06.870 "is_configured": true, 00:19:06.870 "data_offset": 2048, 00:19:06.870 "data_size": 63488 00:19:06.870 }, 00:19:06.870 { 00:19:06.870 "name": "BaseBdev3", 00:19:06.870 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:06.870 "is_configured": true, 00:19:06.870 "data_offset": 2048, 00:19:06.870 "data_size": 63488 00:19:06.870 }, 00:19:06.870 { 00:19:06.870 "name": "BaseBdev4", 00:19:06.870 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:06.870 "is_configured": true, 00:19:06.870 "data_offset": 2048, 00:19:06.870 "data_size": 63488 00:19:06.870 } 00:19:06.870 ] 00:19:06.870 }' 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.870 04:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.810 04:36:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.810 "name": "raid_bdev1", 00:19:07.810 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:07.810 "strip_size_kb": 64, 00:19:07.810 "state": "online", 00:19:07.810 "raid_level": "raid5f", 00:19:07.810 "superblock": true, 00:19:07.810 "num_base_bdevs": 4, 00:19:07.810 "num_base_bdevs_discovered": 4, 00:19:07.810 "num_base_bdevs_operational": 4, 00:19:07.810 "process": { 00:19:07.810 "type": "rebuild", 00:19:07.810 "target": "spare", 00:19:07.810 "progress": { 00:19:07.810 "blocks": 107520, 00:19:07.810 "percent": 56 00:19:07.810 } 00:19:07.810 }, 00:19:07.810 "base_bdevs_list": [ 00:19:07.810 { 00:19:07.810 "name": "spare", 00:19:07.810 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:07.810 "is_configured": true, 00:19:07.810 "data_offset": 2048, 00:19:07.810 "data_size": 63488 00:19:07.810 }, 00:19:07.810 { 00:19:07.810 "name": "BaseBdev2", 00:19:07.810 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:07.810 "is_configured": true, 00:19:07.810 "data_offset": 2048, 00:19:07.810 "data_size": 63488 00:19:07.810 }, 00:19:07.810 { 00:19:07.810 "name": "BaseBdev3", 00:19:07.810 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:07.810 "is_configured": true, 00:19:07.810 "data_offset": 2048, 00:19:07.810 
"data_size": 63488 00:19:07.810 }, 00:19:07.810 { 00:19:07.810 "name": "BaseBdev4", 00:19:07.810 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:07.810 "is_configured": true, 00:19:07.810 "data_offset": 2048, 00:19:07.810 "data_size": 63488 00:19:07.810 } 00:19:07.810 ] 00:19:07.810 }' 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.810 04:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.816 
04:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.816 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.817 "name": "raid_bdev1", 00:19:08.817 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:08.817 "strip_size_kb": 64, 00:19:08.817 "state": "online", 00:19:08.817 "raid_level": "raid5f", 00:19:08.817 "superblock": true, 00:19:08.817 "num_base_bdevs": 4, 00:19:08.817 "num_base_bdevs_discovered": 4, 00:19:08.817 "num_base_bdevs_operational": 4, 00:19:08.817 "process": { 00:19:08.817 "type": "rebuild", 00:19:08.817 "target": "spare", 00:19:08.817 "progress": { 00:19:08.817 "blocks": 130560, 00:19:08.817 "percent": 68 00:19:08.817 } 00:19:08.817 }, 00:19:08.817 "base_bdevs_list": [ 00:19:08.817 { 00:19:08.817 "name": "spare", 00:19:08.817 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:08.817 "is_configured": true, 00:19:08.817 "data_offset": 2048, 00:19:08.817 "data_size": 63488 00:19:08.817 }, 00:19:08.817 { 00:19:08.817 "name": "BaseBdev2", 00:19:08.817 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:08.817 "is_configured": true, 00:19:08.817 "data_offset": 2048, 00:19:08.817 "data_size": 63488 00:19:08.817 }, 00:19:08.817 { 00:19:08.817 "name": "BaseBdev3", 00:19:08.817 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:08.817 "is_configured": true, 00:19:08.817 "data_offset": 2048, 00:19:08.817 "data_size": 63488 00:19:08.817 }, 00:19:08.817 { 00:19:08.817 "name": "BaseBdev4", 00:19:08.817 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:08.817 "is_configured": true, 00:19:08.817 "data_offset": 2048, 00:19:08.817 "data_size": 63488 00:19:08.817 } 00:19:08.817 ] 00:19:08.817 }' 00:19:08.817 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.077 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.077 04:36:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.077 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.077 04:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.018 "name": "raid_bdev1", 00:19:10.018 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:10.018 "strip_size_kb": 64, 00:19:10.018 "state": "online", 00:19:10.018 "raid_level": "raid5f", 00:19:10.018 "superblock": true, 00:19:10.018 "num_base_bdevs": 4, 00:19:10.018 "num_base_bdevs_discovered": 4, 00:19:10.018 "num_base_bdevs_operational": 
4, 00:19:10.018 "process": { 00:19:10.018 "type": "rebuild", 00:19:10.018 "target": "spare", 00:19:10.018 "progress": { 00:19:10.018 "blocks": 151680, 00:19:10.018 "percent": 79 00:19:10.018 } 00:19:10.018 }, 00:19:10.018 "base_bdevs_list": [ 00:19:10.018 { 00:19:10.018 "name": "spare", 00:19:10.018 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:10.018 "is_configured": true, 00:19:10.018 "data_offset": 2048, 00:19:10.018 "data_size": 63488 00:19:10.018 }, 00:19:10.018 { 00:19:10.018 "name": "BaseBdev2", 00:19:10.018 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:10.018 "is_configured": true, 00:19:10.018 "data_offset": 2048, 00:19:10.018 "data_size": 63488 00:19:10.018 }, 00:19:10.018 { 00:19:10.018 "name": "BaseBdev3", 00:19:10.018 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:10.018 "is_configured": true, 00:19:10.018 "data_offset": 2048, 00:19:10.018 "data_size": 63488 00:19:10.018 }, 00:19:10.018 { 00:19:10.018 "name": "BaseBdev4", 00:19:10.018 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:10.018 "is_configured": true, 00:19:10.018 "data_offset": 2048, 00:19:10.018 "data_size": 63488 00:19:10.018 } 00:19:10.018 ] 00:19:10.018 }' 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.018 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.278 04:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.218 
04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.218 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.218 "name": "raid_bdev1", 00:19:11.218 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:11.218 "strip_size_kb": 64, 00:19:11.218 "state": "online", 00:19:11.218 "raid_level": "raid5f", 00:19:11.218 "superblock": true, 00:19:11.218 "num_base_bdevs": 4, 00:19:11.218 "num_base_bdevs_discovered": 4, 00:19:11.218 "num_base_bdevs_operational": 4, 00:19:11.218 "process": { 00:19:11.218 "type": "rebuild", 00:19:11.218 "target": "spare", 00:19:11.218 "progress": { 00:19:11.218 "blocks": 172800, 00:19:11.218 "percent": 90 00:19:11.218 } 00:19:11.218 }, 00:19:11.218 "base_bdevs_list": [ 00:19:11.218 { 00:19:11.218 "name": "spare", 00:19:11.218 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:11.218 "is_configured": true, 00:19:11.218 "data_offset": 2048, 00:19:11.218 "data_size": 63488 00:19:11.218 }, 00:19:11.218 { 00:19:11.218 "name": "BaseBdev2", 00:19:11.218 "uuid": 
"0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:11.218 "is_configured": true, 00:19:11.218 "data_offset": 2048, 00:19:11.218 "data_size": 63488 00:19:11.218 }, 00:19:11.218 { 00:19:11.219 "name": "BaseBdev3", 00:19:11.219 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:11.219 "is_configured": true, 00:19:11.219 "data_offset": 2048, 00:19:11.219 "data_size": 63488 00:19:11.219 }, 00:19:11.219 { 00:19:11.219 "name": "BaseBdev4", 00:19:11.219 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:11.219 "is_configured": true, 00:19:11.219 "data_offset": 2048, 00:19:11.219 "data_size": 63488 00:19:11.219 } 00:19:11.219 ] 00:19:11.219 }' 00:19:11.219 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.219 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.219 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.219 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.219 04:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.157 [2024-11-27 04:36:08.537546] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:12.157 [2024-11-27 04:36:08.537718] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:12.157 [2024-11-27 04:36:08.537935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.416 "name": "raid_bdev1", 00:19:12.416 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:12.416 "strip_size_kb": 64, 00:19:12.416 "state": "online", 00:19:12.416 "raid_level": "raid5f", 00:19:12.416 "superblock": true, 00:19:12.416 "num_base_bdevs": 4, 00:19:12.416 "num_base_bdevs_discovered": 4, 00:19:12.416 "num_base_bdevs_operational": 4, 00:19:12.416 "base_bdevs_list": [ 00:19:12.416 { 00:19:12.416 "name": "spare", 00:19:12.416 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:12.416 "is_configured": true, 00:19:12.416 "data_offset": 2048, 00:19:12.416 "data_size": 63488 00:19:12.416 }, 00:19:12.416 { 00:19:12.416 "name": "BaseBdev2", 00:19:12.416 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:12.416 "is_configured": true, 00:19:12.416 "data_offset": 2048, 00:19:12.416 "data_size": 63488 00:19:12.416 }, 00:19:12.416 { 00:19:12.416 "name": "BaseBdev3", 00:19:12.416 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:12.416 "is_configured": true, 00:19:12.416 "data_offset": 2048, 00:19:12.416 "data_size": 63488 00:19:12.416 }, 
00:19:12.416 { 00:19:12.416 "name": "BaseBdev4", 00:19:12.416 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:12.416 "is_configured": true, 00:19:12.416 "data_offset": 2048, 00:19:12.416 "data_size": 63488 00:19:12.416 } 00:19:12.416 ] 00:19:12.416 }' 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.416 "name": "raid_bdev1", 00:19:12.416 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:12.416 "strip_size_kb": 64, 00:19:12.416 "state": "online", 00:19:12.416 "raid_level": "raid5f", 00:19:12.416 "superblock": true, 00:19:12.416 "num_base_bdevs": 4, 00:19:12.416 "num_base_bdevs_discovered": 4, 00:19:12.416 "num_base_bdevs_operational": 4, 00:19:12.416 "base_bdevs_list": [ 00:19:12.416 { 00:19:12.416 "name": "spare", 00:19:12.416 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:12.416 "is_configured": true, 00:19:12.416 "data_offset": 2048, 00:19:12.416 "data_size": 63488 00:19:12.416 }, 00:19:12.416 { 00:19:12.416 "name": "BaseBdev2", 00:19:12.416 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:12.416 "is_configured": true, 00:19:12.416 "data_offset": 2048, 00:19:12.416 "data_size": 63488 00:19:12.416 }, 00:19:12.416 { 00:19:12.416 "name": "BaseBdev3", 00:19:12.416 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:12.416 "is_configured": true, 00:19:12.416 "data_offset": 2048, 00:19:12.416 "data_size": 63488 00:19:12.416 }, 00:19:12.416 { 00:19:12.416 "name": "BaseBdev4", 00:19:12.416 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:12.416 "is_configured": true, 00:19:12.416 "data_offset": 2048, 00:19:12.416 "data_size": 63488 00:19:12.416 } 00:19:12.416 ] 00:19:12.416 }' 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.416 04:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.675 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.675 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:12.675 04:36:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.675 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.675 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.675 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.675 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:12.675 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.675 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.675 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.676 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.676 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.676 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.676 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.676 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.676 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.676 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.676 "name": "raid_bdev1", 00:19:12.676 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:12.676 "strip_size_kb": 64, 00:19:12.676 "state": "online", 00:19:12.676 "raid_level": "raid5f", 00:19:12.676 "superblock": true, 00:19:12.676 "num_base_bdevs": 4, 00:19:12.676 "num_base_bdevs_discovered": 4, 00:19:12.676 "num_base_bdevs_operational": 4, 00:19:12.676 
"base_bdevs_list": [ 00:19:12.676 { 00:19:12.676 "name": "spare", 00:19:12.676 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:12.676 "is_configured": true, 00:19:12.676 "data_offset": 2048, 00:19:12.676 "data_size": 63488 00:19:12.676 }, 00:19:12.676 { 00:19:12.676 "name": "BaseBdev2", 00:19:12.676 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:12.676 "is_configured": true, 00:19:12.676 "data_offset": 2048, 00:19:12.676 "data_size": 63488 00:19:12.676 }, 00:19:12.676 { 00:19:12.676 "name": "BaseBdev3", 00:19:12.676 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:12.676 "is_configured": true, 00:19:12.676 "data_offset": 2048, 00:19:12.676 "data_size": 63488 00:19:12.676 }, 00:19:12.676 { 00:19:12.676 "name": "BaseBdev4", 00:19:12.676 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:12.676 "is_configured": true, 00:19:12.676 "data_offset": 2048, 00:19:12.676 "data_size": 63488 00:19:12.676 } 00:19:12.676 ] 00:19:12.676 }' 00:19:12.676 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.676 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.935 [2024-11-27 04:36:09.460439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.935 [2024-11-27 04:36:09.460477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.935 [2024-11-27 04:36:09.460574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.935 [2024-11-27 04:36:09.460685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:19:12.935 [2024-11-27 04:36:09.460718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:12.935 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:13.194 /dev/nbd0 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:13.194 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:13.194 1+0 records in 00:19:13.194 1+0 records out 00:19:13.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286194 s, 14.3 MB/s 00:19:13.454 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.454 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:13.454 04:36:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.454 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:13.454 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:13.454 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:13.454 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:13.454 04:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:13.454 /dev/nbd1 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:19:13.454 1+0 records in 00:19:13.454 1+0 records out 00:19:13.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392962 s, 10.4 MB/s 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:13.454 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.713 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.971 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.229 [2024-11-27 04:36:10.761900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:14.229 [2024-11-27 04:36:10.762003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.229 [2024-11-27 04:36:10.762072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:14.229 [2024-11-27 04:36:10.762126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.229 [2024-11-27 04:36:10.764831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.229 [2024-11-27 04:36:10.764909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:14.229 [2024-11-27 04:36:10.765042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:14.229 [2024-11-27 04:36:10.765153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.229 [2024-11-27 04:36:10.765349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.229 [2024-11-27 04:36:10.765521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:14.229 [2024-11-27 04:36:10.765670] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:14.229 spare 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.229 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.487 [2024-11-27 04:36:10.865638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:14.487 [2024-11-27 04:36:10.865756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:14.487 [2024-11-27 04:36:10.866196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:19:14.487 [2024-11-27 04:36:10.875850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:14.487 [2024-11-27 04:36:10.875921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:14.487 [2024-11-27 04:36:10.876223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.487 "name": "raid_bdev1", 00:19:14.487 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:14.487 "strip_size_kb": 64, 00:19:14.487 "state": "online", 00:19:14.487 "raid_level": "raid5f", 00:19:14.487 "superblock": true, 00:19:14.487 "num_base_bdevs": 4, 00:19:14.487 "num_base_bdevs_discovered": 4, 00:19:14.487 "num_base_bdevs_operational": 4, 00:19:14.487 "base_bdevs_list": [ 00:19:14.487 { 00:19:14.487 "name": "spare", 00:19:14.487 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:14.487 "is_configured": true, 00:19:14.487 "data_offset": 2048, 00:19:14.487 "data_size": 63488 00:19:14.487 }, 00:19:14.487 { 00:19:14.487 "name": "BaseBdev2", 00:19:14.487 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:14.487 "is_configured": true, 00:19:14.487 "data_offset": 
2048, 00:19:14.487 "data_size": 63488 00:19:14.487 }, 00:19:14.487 { 00:19:14.487 "name": "BaseBdev3", 00:19:14.487 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:14.487 "is_configured": true, 00:19:14.487 "data_offset": 2048, 00:19:14.487 "data_size": 63488 00:19:14.487 }, 00:19:14.487 { 00:19:14.487 "name": "BaseBdev4", 00:19:14.487 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:14.487 "is_configured": true, 00:19:14.487 "data_offset": 2048, 00:19:14.487 "data_size": 63488 00:19:14.487 } 00:19:14.487 ] 00:19:14.487 }' 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.487 04:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.053 "name": 
"raid_bdev1", 00:19:15.053 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:15.053 "strip_size_kb": 64, 00:19:15.053 "state": "online", 00:19:15.053 "raid_level": "raid5f", 00:19:15.053 "superblock": true, 00:19:15.053 "num_base_bdevs": 4, 00:19:15.053 "num_base_bdevs_discovered": 4, 00:19:15.053 "num_base_bdevs_operational": 4, 00:19:15.053 "base_bdevs_list": [ 00:19:15.053 { 00:19:15.053 "name": "spare", 00:19:15.053 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:15.053 "is_configured": true, 00:19:15.053 "data_offset": 2048, 00:19:15.053 "data_size": 63488 00:19:15.053 }, 00:19:15.053 { 00:19:15.053 "name": "BaseBdev2", 00:19:15.053 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:15.053 "is_configured": true, 00:19:15.053 "data_offset": 2048, 00:19:15.053 "data_size": 63488 00:19:15.053 }, 00:19:15.053 { 00:19:15.053 "name": "BaseBdev3", 00:19:15.053 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:15.053 "is_configured": true, 00:19:15.053 "data_offset": 2048, 00:19:15.053 "data_size": 63488 00:19:15.053 }, 00:19:15.053 { 00:19:15.053 "name": "BaseBdev4", 00:19:15.053 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:15.053 "is_configured": true, 00:19:15.053 "data_offset": 2048, 00:19:15.053 "data_size": 63488 00:19:15.053 } 00:19:15.053 ] 00:19:15.053 }' 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.053 
04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.053 [2024-11-27 04:36:11.573016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.053 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.053 "name": "raid_bdev1", 00:19:15.053 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:15.053 "strip_size_kb": 64, 00:19:15.053 "state": "online", 00:19:15.053 "raid_level": "raid5f", 00:19:15.053 "superblock": true, 00:19:15.053 "num_base_bdevs": 4, 00:19:15.053 "num_base_bdevs_discovered": 3, 00:19:15.053 "num_base_bdevs_operational": 3, 00:19:15.053 "base_bdevs_list": [ 00:19:15.053 { 00:19:15.053 "name": null, 00:19:15.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.053 "is_configured": false, 00:19:15.053 "data_offset": 0, 00:19:15.053 "data_size": 63488 00:19:15.053 }, 00:19:15.053 { 00:19:15.053 "name": "BaseBdev2", 00:19:15.053 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:15.053 "is_configured": true, 00:19:15.053 "data_offset": 2048, 00:19:15.053 "data_size": 63488 00:19:15.053 }, 00:19:15.053 { 00:19:15.053 "name": "BaseBdev3", 00:19:15.053 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:15.053 "is_configured": true, 00:19:15.053 "data_offset": 2048, 00:19:15.053 "data_size": 63488 00:19:15.053 }, 00:19:15.053 { 00:19:15.053 "name": "BaseBdev4", 00:19:15.054 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:15.054 "is_configured": true, 00:19:15.054 "data_offset": 
2048, 00:19:15.054 "data_size": 63488 00:19:15.054 } 00:19:15.054 ] 00:19:15.054 }' 00:19:15.054 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.054 04:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.620 04:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:15.620 04:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.620 04:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.620 [2024-11-27 04:36:12.012326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:15.620 [2024-11-27 04:36:12.012601] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:15.620 [2024-11-27 04:36:12.012683] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:15.620 [2024-11-27 04:36:12.012760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:15.620 [2024-11-27 04:36:12.030641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:19:15.620 04:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.620 04:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:15.620 [2024-11-27 04:36:12.042235] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.556 "name": "raid_bdev1", 00:19:16.556 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:16.556 "strip_size_kb": 64, 00:19:16.556 "state": "online", 00:19:16.556 
"raid_level": "raid5f", 00:19:16.556 "superblock": true, 00:19:16.556 "num_base_bdevs": 4, 00:19:16.556 "num_base_bdevs_discovered": 4, 00:19:16.556 "num_base_bdevs_operational": 4, 00:19:16.556 "process": { 00:19:16.556 "type": "rebuild", 00:19:16.556 "target": "spare", 00:19:16.556 "progress": { 00:19:16.556 "blocks": 17280, 00:19:16.556 "percent": 9 00:19:16.556 } 00:19:16.556 }, 00:19:16.556 "base_bdevs_list": [ 00:19:16.556 { 00:19:16.556 "name": "spare", 00:19:16.556 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:16.556 "is_configured": true, 00:19:16.556 "data_offset": 2048, 00:19:16.556 "data_size": 63488 00:19:16.556 }, 00:19:16.556 { 00:19:16.556 "name": "BaseBdev2", 00:19:16.556 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:16.556 "is_configured": true, 00:19:16.556 "data_offset": 2048, 00:19:16.556 "data_size": 63488 00:19:16.556 }, 00:19:16.556 { 00:19:16.556 "name": "BaseBdev3", 00:19:16.556 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:16.556 "is_configured": true, 00:19:16.556 "data_offset": 2048, 00:19:16.556 "data_size": 63488 00:19:16.556 }, 00:19:16.556 { 00:19:16.556 "name": "BaseBdev4", 00:19:16.556 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:16.556 "is_configured": true, 00:19:16.556 "data_offset": 2048, 00:19:16.556 "data_size": 63488 00:19:16.556 } 00:19:16.556 ] 00:19:16.556 }' 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.556 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.815 [2024-11-27 04:36:13.169879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:16.815 [2024-11-27 04:36:13.252025] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:16.815 [2024-11-27 04:36:13.252261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.815 [2024-11-27 04:36:13.252302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:16.815 [2024-11-27 04:36:13.252320] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:16.815 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.816 "name": "raid_bdev1", 00:19:16.816 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:16.816 "strip_size_kb": 64, 00:19:16.816 "state": "online", 00:19:16.816 "raid_level": "raid5f", 00:19:16.816 "superblock": true, 00:19:16.816 "num_base_bdevs": 4, 00:19:16.816 "num_base_bdevs_discovered": 3, 00:19:16.816 "num_base_bdevs_operational": 3, 00:19:16.816 "base_bdevs_list": [ 00:19:16.816 { 00:19:16.816 "name": null, 00:19:16.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.816 "is_configured": false, 00:19:16.816 "data_offset": 0, 00:19:16.816 "data_size": 63488 00:19:16.816 }, 00:19:16.816 { 00:19:16.816 "name": "BaseBdev2", 00:19:16.816 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:16.816 "is_configured": true, 00:19:16.816 "data_offset": 2048, 00:19:16.816 "data_size": 63488 00:19:16.816 }, 00:19:16.816 { 00:19:16.816 "name": "BaseBdev3", 00:19:16.816 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:16.816 "is_configured": true, 00:19:16.816 "data_offset": 2048, 00:19:16.816 "data_size": 63488 00:19:16.816 }, 00:19:16.816 { 00:19:16.816 "name": "BaseBdev4", 00:19:16.816 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:16.816 "is_configured": true, 00:19:16.816 "data_offset": 2048, 00:19:16.816 "data_size": 63488 00:19:16.816 } 00:19:16.816 ] 00:19:16.816 }' 
00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.816 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.384 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:17.384 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.384 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.384 [2024-11-27 04:36:13.784414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:17.384 [2024-11-27 04:36:13.784576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.384 [2024-11-27 04:36:13.784633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:17.384 [2024-11-27 04:36:13.784718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.384 [2024-11-27 04:36:13.785389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.384 [2024-11-27 04:36:13.785475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:17.384 [2024-11-27 04:36:13.785645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:17.384 [2024-11-27 04:36:13.785702] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:17.384 [2024-11-27 04:36:13.785758] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:17.384 [2024-11-27 04:36:13.785822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.384 [2024-11-27 04:36:13.805121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:19:17.384 spare 00:19:17.384 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.384 04:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:17.384 [2024-11-27 04:36:13.817330] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.323 "name": "raid_bdev1", 00:19:18.323 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:18.323 "strip_size_kb": 64, 00:19:18.323 "state": 
"online", 00:19:18.323 "raid_level": "raid5f", 00:19:18.323 "superblock": true, 00:19:18.323 "num_base_bdevs": 4, 00:19:18.323 "num_base_bdevs_discovered": 4, 00:19:18.323 "num_base_bdevs_operational": 4, 00:19:18.323 "process": { 00:19:18.323 "type": "rebuild", 00:19:18.323 "target": "spare", 00:19:18.323 "progress": { 00:19:18.323 "blocks": 17280, 00:19:18.323 "percent": 9 00:19:18.323 } 00:19:18.323 }, 00:19:18.323 "base_bdevs_list": [ 00:19:18.323 { 00:19:18.323 "name": "spare", 00:19:18.323 "uuid": "8529979e-8259-53fa-822e-81f31ebd964f", 00:19:18.323 "is_configured": true, 00:19:18.323 "data_offset": 2048, 00:19:18.323 "data_size": 63488 00:19:18.323 }, 00:19:18.323 { 00:19:18.323 "name": "BaseBdev2", 00:19:18.323 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:18.323 "is_configured": true, 00:19:18.323 "data_offset": 2048, 00:19:18.323 "data_size": 63488 00:19:18.323 }, 00:19:18.323 { 00:19:18.323 "name": "BaseBdev3", 00:19:18.323 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:18.323 "is_configured": true, 00:19:18.323 "data_offset": 2048, 00:19:18.323 "data_size": 63488 00:19:18.323 }, 00:19:18.323 { 00:19:18.323 "name": "BaseBdev4", 00:19:18.323 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:18.323 "is_configured": true, 00:19:18.323 "data_offset": 2048, 00:19:18.323 "data_size": 63488 00:19:18.323 } 00:19:18.323 ] 00:19:18.323 }' 00:19:18.323 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.583 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:18.583 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.583 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.583 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:18.583 04:36:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.583 04:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 [2024-11-27 04:36:14.981298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.583 [2024-11-27 04:36:15.027554] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:18.583 [2024-11-27 04:36:15.027711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.583 [2024-11-27 04:36:15.027740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.583 [2024-11-27 04:36:15.027750] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.583 04:36:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.583 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.583 "name": "raid_bdev1", 00:19:18.584 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:18.584 "strip_size_kb": 64, 00:19:18.584 "state": "online", 00:19:18.584 "raid_level": "raid5f", 00:19:18.584 "superblock": true, 00:19:18.584 "num_base_bdevs": 4, 00:19:18.584 "num_base_bdevs_discovered": 3, 00:19:18.584 "num_base_bdevs_operational": 3, 00:19:18.584 "base_bdevs_list": [ 00:19:18.584 { 00:19:18.584 "name": null, 00:19:18.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.584 "is_configured": false, 00:19:18.584 "data_offset": 0, 00:19:18.584 "data_size": 63488 00:19:18.584 }, 00:19:18.584 { 00:19:18.584 "name": "BaseBdev2", 00:19:18.584 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:18.584 "is_configured": true, 00:19:18.584 "data_offset": 2048, 00:19:18.584 "data_size": 63488 00:19:18.584 }, 00:19:18.584 { 00:19:18.584 "name": "BaseBdev3", 00:19:18.584 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:18.584 "is_configured": true, 00:19:18.584 "data_offset": 2048, 00:19:18.584 "data_size": 63488 00:19:18.584 }, 00:19:18.584 { 00:19:18.584 "name": "BaseBdev4", 00:19:18.584 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:18.584 "is_configured": true, 00:19:18.584 "data_offset": 2048, 00:19:18.584 
"data_size": 63488 00:19:18.584 } 00:19:18.584 ] 00:19:18.584 }' 00:19:18.584 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.584 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.153 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.153 "name": "raid_bdev1", 00:19:19.153 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:19.153 "strip_size_kb": 64, 00:19:19.153 "state": "online", 00:19:19.153 "raid_level": "raid5f", 00:19:19.153 "superblock": true, 00:19:19.153 "num_base_bdevs": 4, 00:19:19.153 "num_base_bdevs_discovered": 3, 00:19:19.153 "num_base_bdevs_operational": 3, 00:19:19.153 "base_bdevs_list": [ 00:19:19.153 { 00:19:19.153 "name": null, 00:19:19.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.153 
"is_configured": false, 00:19:19.153 "data_offset": 0, 00:19:19.153 "data_size": 63488 00:19:19.153 }, 00:19:19.153 { 00:19:19.153 "name": "BaseBdev2", 00:19:19.153 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:19.153 "is_configured": true, 00:19:19.153 "data_offset": 2048, 00:19:19.153 "data_size": 63488 00:19:19.153 }, 00:19:19.153 { 00:19:19.153 "name": "BaseBdev3", 00:19:19.153 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:19.153 "is_configured": true, 00:19:19.153 "data_offset": 2048, 00:19:19.153 "data_size": 63488 00:19:19.153 }, 00:19:19.153 { 00:19:19.154 "name": "BaseBdev4", 00:19:19.154 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:19.154 "is_configured": true, 00:19:19.154 "data_offset": 2048, 00:19:19.154 "data_size": 63488 00:19:19.154 } 00:19:19.154 ] 00:19:19.154 }' 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.154 04:36:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.154 [2024-11-27 04:36:15.616697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:19.154 [2024-11-27 04:36:15.616810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.154 [2024-11-27 04:36:15.616840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:19.154 [2024-11-27 04:36:15.616850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.154 [2024-11-27 04:36:15.617411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.154 [2024-11-27 04:36:15.617432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:19.154 [2024-11-27 04:36:15.617521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:19.154 [2024-11-27 04:36:15.617536] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.154 [2024-11-27 04:36:15.617550] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:19.154 [2024-11-27 04:36:15.617561] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:19.154 BaseBdev1 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.154 04:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.093 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.353 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.353 "name": "raid_bdev1", 00:19:20.353 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:20.353 "strip_size_kb": 64, 00:19:20.354 "state": "online", 00:19:20.354 "raid_level": "raid5f", 00:19:20.354 "superblock": true, 00:19:20.354 "num_base_bdevs": 4, 00:19:20.354 "num_base_bdevs_discovered": 3, 00:19:20.354 "num_base_bdevs_operational": 3, 00:19:20.354 "base_bdevs_list": [ 00:19:20.354 { 00:19:20.354 "name": null, 00:19:20.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.354 "is_configured": false, 00:19:20.354 
"data_offset": 0, 00:19:20.354 "data_size": 63488 00:19:20.354 }, 00:19:20.354 { 00:19:20.354 "name": "BaseBdev2", 00:19:20.354 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:20.354 "is_configured": true, 00:19:20.354 "data_offset": 2048, 00:19:20.354 "data_size": 63488 00:19:20.354 }, 00:19:20.354 { 00:19:20.354 "name": "BaseBdev3", 00:19:20.354 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:20.354 "is_configured": true, 00:19:20.354 "data_offset": 2048, 00:19:20.354 "data_size": 63488 00:19:20.354 }, 00:19:20.354 { 00:19:20.354 "name": "BaseBdev4", 00:19:20.354 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:20.354 "is_configured": true, 00:19:20.354 "data_offset": 2048, 00:19:20.354 "data_size": 63488 00:19:20.354 } 00:19:20.354 ] 00:19:20.354 }' 00:19:20.354 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.354 04:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.614 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.614 "name": "raid_bdev1", 00:19:20.614 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:20.614 "strip_size_kb": 64, 00:19:20.614 "state": "online", 00:19:20.614 "raid_level": "raid5f", 00:19:20.614 "superblock": true, 00:19:20.614 "num_base_bdevs": 4, 00:19:20.614 "num_base_bdevs_discovered": 3, 00:19:20.614 "num_base_bdevs_operational": 3, 00:19:20.614 "base_bdevs_list": [ 00:19:20.614 { 00:19:20.614 "name": null, 00:19:20.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.615 "is_configured": false, 00:19:20.615 "data_offset": 0, 00:19:20.615 "data_size": 63488 00:19:20.615 }, 00:19:20.615 { 00:19:20.615 "name": "BaseBdev2", 00:19:20.615 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:20.615 "is_configured": true, 00:19:20.615 "data_offset": 2048, 00:19:20.615 "data_size": 63488 00:19:20.615 }, 00:19:20.615 { 00:19:20.615 "name": "BaseBdev3", 00:19:20.615 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:20.615 "is_configured": true, 00:19:20.615 "data_offset": 2048, 00:19:20.615 "data_size": 63488 00:19:20.615 }, 00:19:20.615 { 00:19:20.615 "name": "BaseBdev4", 00:19:20.615 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:20.615 "is_configured": true, 00:19:20.615 "data_offset": 2048, 00:19:20.615 "data_size": 63488 00:19:20.615 } 00:19:20.615 ] 00:19:20.615 }' 00:19:20.615 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:20.874 
04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.874 [2024-11-27 04:36:17.286003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.874 [2024-11-27 04:36:17.286296] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:20.874 [2024-11-27 04:36:17.286321] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:20.874 request: 00:19:20.874 { 00:19:20.874 "base_bdev": "BaseBdev1", 00:19:20.874 "raid_bdev": "raid_bdev1", 00:19:20.874 "method": "bdev_raid_add_base_bdev", 00:19:20.874 "req_id": 1 00:19:20.874 } 00:19:20.874 Got JSON-RPC error response 00:19:20.874 response: 00:19:20.874 { 00:19:20.874 "code": -22, 00:19:20.874 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:19:20.874 } 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.874 04:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:21.811 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:21.811 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.811 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.812 "name": "raid_bdev1", 00:19:21.812 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:21.812 "strip_size_kb": 64, 00:19:21.812 "state": "online", 00:19:21.812 "raid_level": "raid5f", 00:19:21.812 "superblock": true, 00:19:21.812 "num_base_bdevs": 4, 00:19:21.812 "num_base_bdevs_discovered": 3, 00:19:21.812 "num_base_bdevs_operational": 3, 00:19:21.812 "base_bdevs_list": [ 00:19:21.812 { 00:19:21.812 "name": null, 00:19:21.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.812 "is_configured": false, 00:19:21.812 "data_offset": 0, 00:19:21.812 "data_size": 63488 00:19:21.812 }, 00:19:21.812 { 00:19:21.812 "name": "BaseBdev2", 00:19:21.812 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:21.812 "is_configured": true, 00:19:21.812 "data_offset": 2048, 00:19:21.812 "data_size": 63488 00:19:21.812 }, 00:19:21.812 { 00:19:21.812 "name": "BaseBdev3", 00:19:21.812 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:21.812 "is_configured": true, 00:19:21.812 "data_offset": 2048, 00:19:21.812 "data_size": 63488 00:19:21.812 }, 00:19:21.812 { 00:19:21.812 "name": "BaseBdev4", 00:19:21.812 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:21.812 "is_configured": true, 00:19:21.812 "data_offset": 2048, 00:19:21.812 "data_size": 63488 00:19:21.812 } 00:19:21.812 ] 00:19:21.812 }' 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.812 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.379 "name": "raid_bdev1", 00:19:22.379 "uuid": "782e6e3c-305d-499a-b734-f46f5317b319", 00:19:22.379 "strip_size_kb": 64, 00:19:22.379 "state": "online", 00:19:22.379 "raid_level": "raid5f", 00:19:22.379 "superblock": true, 00:19:22.379 "num_base_bdevs": 4, 00:19:22.379 "num_base_bdevs_discovered": 3, 00:19:22.379 "num_base_bdevs_operational": 3, 00:19:22.379 "base_bdevs_list": [ 00:19:22.379 { 00:19:22.379 "name": null, 00:19:22.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.379 "is_configured": false, 00:19:22.379 "data_offset": 0, 00:19:22.379 "data_size": 63488 00:19:22.379 }, 00:19:22.379 { 00:19:22.379 "name": "BaseBdev2", 00:19:22.379 "uuid": "0676fc83-2d1f-5c9c-a22f-0221970bb951", 00:19:22.379 "is_configured": true, 
00:19:22.379 "data_offset": 2048, 00:19:22.379 "data_size": 63488 00:19:22.379 }, 00:19:22.379 { 00:19:22.379 "name": "BaseBdev3", 00:19:22.379 "uuid": "9454c866-1b21-52b4-9323-6be23fbd48fa", 00:19:22.379 "is_configured": true, 00:19:22.379 "data_offset": 2048, 00:19:22.379 "data_size": 63488 00:19:22.379 }, 00:19:22.379 { 00:19:22.379 "name": "BaseBdev4", 00:19:22.379 "uuid": "0637e042-efca-56d9-80ff-f1b7301dd84b", 00:19:22.379 "is_configured": true, 00:19:22.379 "data_offset": 2048, 00:19:22.379 "data_size": 63488 00:19:22.379 } 00:19:22.379 ] 00:19:22.379 }' 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.379 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85497 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85497 ']' 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85497 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85497 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.380 killing process with pid 85497 00:19:22.380 Received shutdown signal, test 
time was about 60.000000 seconds 00:19:22.380 00:19:22.380 Latency(us) 00:19:22.380 [2024-11-27T04:36:18.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.380 [2024-11-27T04:36:18.967Z] =================================================================================================================== 00:19:22.380 [2024-11-27T04:36:18.967Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85497' 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85497 00:19:22.380 [2024-11-27 04:36:18.921416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:22.380 [2024-11-27 04:36:18.921555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.380 04:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85497 00:19:22.380 [2024-11-27 04:36:18.921648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.380 [2024-11-27 04:36:18.921662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:22.949 [2024-11-27 04:36:19.436472] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:24.372 ************************************ 00:19:24.372 END TEST raid5f_rebuild_test_sb 00:19:24.372 ************************************ 00:19:24.372 04:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:24.372 00:19:24.372 real 0m27.490s 00:19:24.372 user 0m34.542s 00:19:24.372 sys 0m3.162s 00:19:24.372 04:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.372 04:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.372 04:36:20 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:24.372 04:36:20 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:24.372 04:36:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:24.372 04:36:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.372 04:36:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:24.372 ************************************ 00:19:24.372 START TEST raid_state_function_test_sb_4k 00:19:24.372 ************************************ 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:24.372 04:36:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86323 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86323' 00:19:24.372 Process raid pid: 86323 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86323 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86323 ']' 00:19:24.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.372 04:36:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.372 [2024-11-27 04:36:20.730661] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:24.372 [2024-11-27 04:36:20.730776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.372 [2024-11-27 04:36:20.907219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.630 [2024-11-27 04:36:21.023632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.888 [2024-11-27 04:36:21.233453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.888 [2024-11-27 04:36:21.233578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.146 [2024-11-27 04:36:21.570137] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:25.146 [2024-11-27 04:36:21.570258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:25.146 [2024-11-27 04:36:21.570304] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.146 [2024-11-27 04:36:21.570328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.146 04:36:21 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.146 "name": "Existed_Raid", 00:19:25.146 "uuid": "60743381-fd08-40d6-9e70-35d6bfe20856", 00:19:25.146 "strip_size_kb": 0, 00:19:25.146 "state": "configuring", 00:19:25.146 "raid_level": "raid1", 00:19:25.146 "superblock": true, 00:19:25.146 "num_base_bdevs": 2, 00:19:25.146 "num_base_bdevs_discovered": 0, 00:19:25.146 "num_base_bdevs_operational": 2, 00:19:25.146 "base_bdevs_list": [ 00:19:25.146 { 00:19:25.146 "name": "BaseBdev1", 00:19:25.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.146 "is_configured": false, 00:19:25.146 "data_offset": 0, 00:19:25.146 "data_size": 0 00:19:25.146 }, 00:19:25.146 { 00:19:25.146 "name": "BaseBdev2", 00:19:25.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.146 "is_configured": false, 00:19:25.146 "data_offset": 0, 00:19:25.146 "data_size": 0 00:19:25.146 } 00:19:25.146 ] 00:19:25.146 }' 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.146 04:36:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.714 [2024-11-27 04:36:22.049272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.714 [2024-11-27 04:36:22.049363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.714 [2024-11-27 04:36:22.061264] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:25.714 [2024-11-27 04:36:22.061349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:25.714 [2024-11-27 04:36:22.061381] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.714 [2024-11-27 04:36:22.061410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.714 [2024-11-27 04:36:22.112704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.714 BaseBdev1 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.714 [ 00:19:25.714 { 
00:19:25.714 "name": "BaseBdev1", 00:19:25.714 "aliases": [ 00:19:25.714 "95c4bd3f-6787-4df0-9029-a5575823a2e2" 00:19:25.714 ], 00:19:25.714 "product_name": "Malloc disk", 00:19:25.714 "block_size": 4096, 00:19:25.714 "num_blocks": 8192, 00:19:25.714 "uuid": "95c4bd3f-6787-4df0-9029-a5575823a2e2", 00:19:25.714 "assigned_rate_limits": { 00:19:25.714 "rw_ios_per_sec": 0, 00:19:25.714 "rw_mbytes_per_sec": 0, 00:19:25.714 "r_mbytes_per_sec": 0, 00:19:25.714 "w_mbytes_per_sec": 0 00:19:25.714 }, 00:19:25.714 "claimed": true, 00:19:25.714 "claim_type": "exclusive_write", 00:19:25.714 "zoned": false, 00:19:25.714 "supported_io_types": { 00:19:25.714 "read": true, 00:19:25.714 "write": true, 00:19:25.714 "unmap": true, 00:19:25.714 "flush": true, 00:19:25.714 "reset": true, 00:19:25.714 "nvme_admin": false, 00:19:25.714 "nvme_io": false, 00:19:25.714 "nvme_io_md": false, 00:19:25.714 "write_zeroes": true, 00:19:25.714 "zcopy": true, 00:19:25.714 "get_zone_info": false, 00:19:25.714 "zone_management": false, 00:19:25.714 "zone_append": false, 00:19:25.714 "compare": false, 00:19:25.714 "compare_and_write": false, 00:19:25.714 "abort": true, 00:19:25.714 "seek_hole": false, 00:19:25.714 "seek_data": false, 00:19:25.714 "copy": true, 00:19:25.714 "nvme_iov_md": false 00:19:25.714 }, 00:19:25.714 "memory_domains": [ 00:19:25.714 { 00:19:25.714 "dma_device_id": "system", 00:19:25.714 "dma_device_type": 1 00:19:25.714 }, 00:19:25.714 { 00:19:25.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.714 "dma_device_type": 2 00:19:25.714 } 00:19:25.714 ], 00:19:25.714 "driver_specific": {} 00:19:25.714 } 00:19:25.714 ] 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 2 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.714 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.715 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.715 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.715 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.715 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.715 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.715 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.715 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.715 "name": "Existed_Raid", 00:19:25.715 "uuid": "d34772a7-f7d3-4b0a-a397-0c9e0747c3b8", 00:19:25.715 "strip_size_kb": 0, 00:19:25.715 "state": "configuring", 00:19:25.715 "raid_level": "raid1", 
00:19:25.715 "superblock": true, 00:19:25.715 "num_base_bdevs": 2, 00:19:25.715 "num_base_bdevs_discovered": 1, 00:19:25.715 "num_base_bdevs_operational": 2, 00:19:25.715 "base_bdevs_list": [ 00:19:25.715 { 00:19:25.715 "name": "BaseBdev1", 00:19:25.715 "uuid": "95c4bd3f-6787-4df0-9029-a5575823a2e2", 00:19:25.715 "is_configured": true, 00:19:25.715 "data_offset": 256, 00:19:25.715 "data_size": 7936 00:19:25.715 }, 00:19:25.715 { 00:19:25.715 "name": "BaseBdev2", 00:19:25.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.715 "is_configured": false, 00:19:25.715 "data_offset": 0, 00:19:25.715 "data_size": 0 00:19:25.715 } 00:19:25.715 ] 00:19:25.715 }' 00:19:25.715 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.715 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.282 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:26.282 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.282 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.282 [2024-11-27 04:36:22.580012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:26.282 [2024-11-27 04:36:22.580181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:26.282 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:26.283 [2024-11-27 04:36:22.592036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.283 [2024-11-27 04:36:22.594126] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.283 [2024-11-27 04:36:22.594226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.283 
04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.283 "name": "Existed_Raid", 00:19:26.283 "uuid": "1db7f88c-0256-4f78-be93-5e862d37bffd", 00:19:26.283 "strip_size_kb": 0, 00:19:26.283 "state": "configuring", 00:19:26.283 "raid_level": "raid1", 00:19:26.283 "superblock": true, 00:19:26.283 "num_base_bdevs": 2, 00:19:26.283 "num_base_bdevs_discovered": 1, 00:19:26.283 "num_base_bdevs_operational": 2, 00:19:26.283 "base_bdevs_list": [ 00:19:26.283 { 00:19:26.283 "name": "BaseBdev1", 00:19:26.283 "uuid": "95c4bd3f-6787-4df0-9029-a5575823a2e2", 00:19:26.283 "is_configured": true, 00:19:26.283 "data_offset": 256, 00:19:26.283 "data_size": 7936 00:19:26.283 }, 00:19:26.283 { 00:19:26.283 "name": "BaseBdev2", 00:19:26.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.283 "is_configured": false, 00:19:26.283 "data_offset": 0, 00:19:26.283 "data_size": 0 00:19:26.283 } 00:19:26.283 ] 00:19:26.283 }' 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.283 04:36:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.543 [2024-11-27 04:36:23.098488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.543 [2024-11-27 04:36:23.098790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:26.543 [2024-11-27 04:36:23.098807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:26.543 BaseBdev2 00:19:26.543 [2024-11-27 04:36:23.099064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:26.543 [2024-11-27 04:36:23.099312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:26.543 [2024-11-27 04:36:23.099332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:26.543 [2024-11-27 04:36:23.099508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- 
# rpc_cmd bdev_wait_for_examine 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.543 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.544 [ 00:19:26.544 { 00:19:26.544 "name": "BaseBdev2", 00:19:26.544 "aliases": [ 00:19:26.544 "caa0aa57-47e3-4518-97f4-1eac781c4b7c" 00:19:26.544 ], 00:19:26.544 "product_name": "Malloc disk", 00:19:26.544 "block_size": 4096, 00:19:26.544 "num_blocks": 8192, 00:19:26.544 "uuid": "caa0aa57-47e3-4518-97f4-1eac781c4b7c", 00:19:26.544 "assigned_rate_limits": { 00:19:26.544 "rw_ios_per_sec": 0, 00:19:26.544 "rw_mbytes_per_sec": 0, 00:19:26.544 "r_mbytes_per_sec": 0, 00:19:26.544 "w_mbytes_per_sec": 0 00:19:26.544 }, 00:19:26.544 "claimed": true, 00:19:26.544 "claim_type": "exclusive_write", 00:19:26.544 "zoned": false, 00:19:26.544 "supported_io_types": { 00:19:26.544 "read": true, 00:19:26.544 "write": true, 00:19:26.544 "unmap": true, 00:19:26.544 "flush": true, 00:19:26.544 "reset": true, 00:19:26.544 "nvme_admin": false, 00:19:26.544 "nvme_io": false, 00:19:26.544 "nvme_io_md": false, 00:19:26.544 "write_zeroes": true, 00:19:26.544 "zcopy": true, 00:19:26.544 "get_zone_info": false, 00:19:26.544 "zone_management": false, 00:19:26.544 "zone_append": false, 00:19:26.544 "compare": false, 00:19:26.544 "compare_and_write": false, 00:19:26.544 "abort": true, 00:19:26.544 "seek_hole": false, 00:19:26.544 "seek_data": false, 00:19:26.544 
"copy": true, 00:19:26.544 "nvme_iov_md": false 00:19:26.544 }, 00:19:26.544 "memory_domains": [ 00:19:26.544 { 00:19:26.544 "dma_device_id": "system", 00:19:26.544 "dma_device_type": 1 00:19:26.544 }, 00:19:26.544 { 00:19:26.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.544 "dma_device_type": 2 00:19:26.544 } 00:19:26.544 ], 00:19:26.544 "driver_specific": {} 00:19:26.544 } 00:19:26.544 ] 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.544 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.803 04:36:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.803 "name": "Existed_Raid", 00:19:26.803 "uuid": "1db7f88c-0256-4f78-be93-5e862d37bffd", 00:19:26.803 "strip_size_kb": 0, 00:19:26.803 "state": "online", 00:19:26.803 "raid_level": "raid1", 00:19:26.803 "superblock": true, 00:19:26.803 "num_base_bdevs": 2, 00:19:26.803 "num_base_bdevs_discovered": 2, 00:19:26.803 "num_base_bdevs_operational": 2, 00:19:26.803 "base_bdevs_list": [ 00:19:26.803 { 00:19:26.803 "name": "BaseBdev1", 00:19:26.803 "uuid": "95c4bd3f-6787-4df0-9029-a5575823a2e2", 00:19:26.803 "is_configured": true, 00:19:26.803 "data_offset": 256, 00:19:26.803 "data_size": 7936 00:19:26.803 }, 00:19:26.803 { 00:19:26.803 "name": "BaseBdev2", 00:19:26.803 "uuid": "caa0aa57-47e3-4518-97f4-1eac781c4b7c", 00:19:26.803 "is_configured": true, 00:19:26.803 "data_offset": 256, 00:19:26.803 "data_size": 7936 00:19:26.803 } 00:19:26.803 ] 00:19:26.803 }' 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.803 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # 
verify_raid_bdev_properties Existed_Raid 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:27.062 [2024-11-27 04:36:23.598034] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:27.062 "name": "Existed_Raid", 00:19:27.062 "aliases": [ 00:19:27.062 "1db7f88c-0256-4f78-be93-5e862d37bffd" 00:19:27.062 ], 00:19:27.062 "product_name": "Raid Volume", 00:19:27.062 "block_size": 4096, 00:19:27.062 "num_blocks": 7936, 00:19:27.062 "uuid": "1db7f88c-0256-4f78-be93-5e862d37bffd", 00:19:27.062 "assigned_rate_limits": { 00:19:27.062 "rw_ios_per_sec": 0, 00:19:27.062 "rw_mbytes_per_sec": 0, 00:19:27.062 "r_mbytes_per_sec": 0, 00:19:27.062 "w_mbytes_per_sec": 0 00:19:27.062 }, 00:19:27.062 "claimed": false, 00:19:27.062 "zoned": false, 
00:19:27.062 "supported_io_types": { 00:19:27.062 "read": true, 00:19:27.062 "write": true, 00:19:27.062 "unmap": false, 00:19:27.062 "flush": false, 00:19:27.062 "reset": true, 00:19:27.062 "nvme_admin": false, 00:19:27.062 "nvme_io": false, 00:19:27.062 "nvme_io_md": false, 00:19:27.062 "write_zeroes": true, 00:19:27.062 "zcopy": false, 00:19:27.062 "get_zone_info": false, 00:19:27.062 "zone_management": false, 00:19:27.062 "zone_append": false, 00:19:27.062 "compare": false, 00:19:27.062 "compare_and_write": false, 00:19:27.062 "abort": false, 00:19:27.062 "seek_hole": false, 00:19:27.062 "seek_data": false, 00:19:27.062 "copy": false, 00:19:27.062 "nvme_iov_md": false 00:19:27.062 }, 00:19:27.062 "memory_domains": [ 00:19:27.062 { 00:19:27.062 "dma_device_id": "system", 00:19:27.062 "dma_device_type": 1 00:19:27.062 }, 00:19:27.062 { 00:19:27.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.062 "dma_device_type": 2 00:19:27.062 }, 00:19:27.062 { 00:19:27.062 "dma_device_id": "system", 00:19:27.062 "dma_device_type": 1 00:19:27.062 }, 00:19:27.062 { 00:19:27.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.062 "dma_device_type": 2 00:19:27.062 } 00:19:27.062 ], 00:19:27.062 "driver_specific": { 00:19:27.062 "raid": { 00:19:27.062 "uuid": "1db7f88c-0256-4f78-be93-5e862d37bffd", 00:19:27.062 "strip_size_kb": 0, 00:19:27.062 "state": "online", 00:19:27.062 "raid_level": "raid1", 00:19:27.062 "superblock": true, 00:19:27.062 "num_base_bdevs": 2, 00:19:27.062 "num_base_bdevs_discovered": 2, 00:19:27.062 "num_base_bdevs_operational": 2, 00:19:27.062 "base_bdevs_list": [ 00:19:27.062 { 00:19:27.062 "name": "BaseBdev1", 00:19:27.062 "uuid": "95c4bd3f-6787-4df0-9029-a5575823a2e2", 00:19:27.062 "is_configured": true, 00:19:27.062 "data_offset": 256, 00:19:27.062 "data_size": 7936 00:19:27.062 }, 00:19:27.062 { 00:19:27.062 "name": "BaseBdev2", 00:19:27.062 "uuid": "caa0aa57-47e3-4518-97f4-1eac781c4b7c", 00:19:27.062 "is_configured": true, 00:19:27.062 
"data_offset": 256, 00:19:27.062 "data_size": 7936 00:19:27.062 } 00:19:27.062 ] 00:19:27.062 } 00:19:27.062 } 00:19:27.062 }' 00:19:27.062 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:27.321 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:27.321 BaseBdev2' 00:19:27.321 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.321 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:27.321 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.321 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:27.321 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.321 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:27.322 04:36:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.322 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.322 [2024-11-27 04:36:23.825388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 
00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.581 "name": "Existed_Raid", 00:19:27.581 "uuid": "1db7f88c-0256-4f78-be93-5e862d37bffd", 00:19:27.581 "strip_size_kb": 0, 00:19:27.581 "state": "online", 00:19:27.581 "raid_level": "raid1", 00:19:27.581 "superblock": 
true, 00:19:27.581 "num_base_bdevs": 2, 00:19:27.581 "num_base_bdevs_discovered": 1, 00:19:27.581 "num_base_bdevs_operational": 1, 00:19:27.581 "base_bdevs_list": [ 00:19:27.581 { 00:19:27.581 "name": null, 00:19:27.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.581 "is_configured": false, 00:19:27.581 "data_offset": 0, 00:19:27.581 "data_size": 7936 00:19:27.581 }, 00:19:27.581 { 00:19:27.581 "name": "BaseBdev2", 00:19:27.581 "uuid": "caa0aa57-47e3-4518-97f4-1eac781c4b7c", 00:19:27.581 "is_configured": true, 00:19:27.581 "data_offset": 256, 00:19:27.581 "data_size": 7936 00:19:27.581 } 00:19:27.581 ] 00:19:27.581 }' 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.581 04:36:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # 
rpc_cmd bdev_malloc_delete BaseBdev2 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.840 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.840 [2024-11-27 04:36:24.419582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:27.840 [2024-11-27 04:36:24.419754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.098 [2024-11-27 04:36:24.523842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.098 [2024-11-27 04:36:24.524005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.098 [2024-11-27 04:36:24.524058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:28.098 04:36:24 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86323 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86323 ']' 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86323 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86323 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86323' 00:19:28.098 killing process with pid 86323 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86323 00:19:28.098 [2024-11-27 04:36:24.624872] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.098 04:36:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86323 00:19:28.098 [2024-11-27 04:36:24.645160] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:29.477 04:36:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:29.477 00:19:29.477 real 0m5.267s 00:19:29.477 user 0m7.513s 00:19:29.477 sys 0m0.858s 00:19:29.477 04:36:25 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.477 04:36:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.477 ************************************ 00:19:29.477 END TEST raid_state_function_test_sb_4k 00:19:29.477 ************************************ 00:19:29.477 04:36:25 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:29.478 04:36:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:29.478 04:36:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.478 04:36:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.478 ************************************ 00:19:29.478 START TEST raid_superblock_test_4k 00:19:29.478 ************************************ 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:29.478 
04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86597 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86597 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86597 ']' 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:29.478 04:36:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.478 [2024-11-27 04:36:26.039819] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:19:29.478 [2024-11-27 04:36:26.039947] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86597 ] 00:19:29.737 [2024-11-27 04:36:26.219225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.997 [2024-11-27 04:36:26.339472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.997 [2024-11-27 04:36:26.544487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.997 [2024-11-27 04:36:26.544552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.564 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.564 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:19:30.564 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:30.564 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:30.564 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:30.564 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:30.564 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.565 malloc1 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.565 [2024-11-27 04:36:26.977354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:30.565 [2024-11-27 04:36:26.977436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.565 [2024-11-27 04:36:26.977459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:30.565 [2024-11-27 04:36:26.977470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.565 [2024-11-27 04:36:26.979824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.565 [2024-11-27 04:36:26.979862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:30.565 pt1 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.565 04:36:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.565 malloc2 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.565 [2024-11-27 04:36:27.036180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:30.565 [2024-11-27 04:36:27.036260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.565 [2024-11-27 04:36:27.036301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:30.565 [2024-11-27 04:36:27.036316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.565 [2024-11-27 04:36:27.038814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.565 [2024-11-27 
04:36:27.038858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:30.565 pt2 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.565 [2024-11-27 04:36:27.048178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:30.565 [2024-11-27 04:36:27.050171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:30.565 [2024-11-27 04:36:27.050363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:30.565 [2024-11-27 04:36:27.050390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:30.565 [2024-11-27 04:36:27.050665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:30.565 [2024-11-27 04:36:27.050858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:30.565 [2024-11-27 04:36:27.050882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:30.565 [2024-11-27 04:36:27.051053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.565 "name": "raid_bdev1", 00:19:30.565 "uuid": "528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:30.565 "strip_size_kb": 0, 00:19:30.565 "state": "online", 00:19:30.565 "raid_level": "raid1", 00:19:30.565 "superblock": true, 00:19:30.565 "num_base_bdevs": 2, 00:19:30.565 
"num_base_bdevs_discovered": 2, 00:19:30.565 "num_base_bdevs_operational": 2, 00:19:30.565 "base_bdevs_list": [ 00:19:30.565 { 00:19:30.565 "name": "pt1", 00:19:30.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.565 "is_configured": true, 00:19:30.565 "data_offset": 256, 00:19:30.565 "data_size": 7936 00:19:30.565 }, 00:19:30.565 { 00:19:30.565 "name": "pt2", 00:19:30.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.565 "is_configured": true, 00:19:30.565 "data_offset": 256, 00:19:30.565 "data_size": 7936 00:19:30.565 } 00:19:30.565 ] 00:19:30.565 }' 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.565 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.131 [2024-11-27 04:36:27.547646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:31.131 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:31.132 "name": "raid_bdev1", 00:19:31.132 "aliases": [ 00:19:31.132 "528201bf-12de-45ea-b782-93e9a4382e9a" 00:19:31.132 ], 00:19:31.132 "product_name": "Raid Volume", 00:19:31.132 "block_size": 4096, 00:19:31.132 "num_blocks": 7936, 00:19:31.132 "uuid": "528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:31.132 "assigned_rate_limits": { 00:19:31.132 "rw_ios_per_sec": 0, 00:19:31.132 "rw_mbytes_per_sec": 0, 00:19:31.132 "r_mbytes_per_sec": 0, 00:19:31.132 "w_mbytes_per_sec": 0 00:19:31.132 }, 00:19:31.132 "claimed": false, 00:19:31.132 "zoned": false, 00:19:31.132 "supported_io_types": { 00:19:31.132 "read": true, 00:19:31.132 "write": true, 00:19:31.132 "unmap": false, 00:19:31.132 "flush": false, 00:19:31.132 "reset": true, 00:19:31.132 "nvme_admin": false, 00:19:31.132 "nvme_io": false, 00:19:31.132 "nvme_io_md": false, 00:19:31.132 "write_zeroes": true, 00:19:31.132 "zcopy": false, 00:19:31.132 "get_zone_info": false, 00:19:31.132 "zone_management": false, 00:19:31.132 "zone_append": false, 00:19:31.132 "compare": false, 00:19:31.132 "compare_and_write": false, 00:19:31.132 "abort": false, 00:19:31.132 "seek_hole": false, 00:19:31.132 "seek_data": false, 00:19:31.132 "copy": false, 00:19:31.132 "nvme_iov_md": false 00:19:31.132 }, 00:19:31.132 "memory_domains": [ 00:19:31.132 { 00:19:31.132 "dma_device_id": "system", 00:19:31.132 "dma_device_type": 1 00:19:31.132 }, 00:19:31.132 { 00:19:31.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.132 "dma_device_type": 2 00:19:31.132 }, 00:19:31.132 { 00:19:31.132 "dma_device_id": "system", 00:19:31.132 "dma_device_type": 1 00:19:31.132 }, 00:19:31.132 { 00:19:31.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.132 "dma_device_type": 2 00:19:31.132 } 00:19:31.132 ], 
00:19:31.132 "driver_specific": { 00:19:31.132 "raid": { 00:19:31.132 "uuid": "528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:31.132 "strip_size_kb": 0, 00:19:31.132 "state": "online", 00:19:31.132 "raid_level": "raid1", 00:19:31.132 "superblock": true, 00:19:31.132 "num_base_bdevs": 2, 00:19:31.132 "num_base_bdevs_discovered": 2, 00:19:31.132 "num_base_bdevs_operational": 2, 00:19:31.132 "base_bdevs_list": [ 00:19:31.132 { 00:19:31.132 "name": "pt1", 00:19:31.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:31.132 "is_configured": true, 00:19:31.132 "data_offset": 256, 00:19:31.132 "data_size": 7936 00:19:31.132 }, 00:19:31.132 { 00:19:31.132 "name": "pt2", 00:19:31.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.132 "is_configured": true, 00:19:31.132 "data_offset": 256, 00:19:31.132 "data_size": 7936 00:19:31.132 } 00:19:31.132 ] 00:19:31.132 } 00:19:31.132 } 00:19:31.132 }' 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:31.132 pt2' 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.132 04:36:27 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.132 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.390 [2024-11-27 04:36:27.751293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=528201bf-12de-45ea-b782-93e9a4382e9a 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 528201bf-12de-45ea-b782-93e9a4382e9a ']' 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.390 [2024-11-27 04:36:27.794859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.390 [2024-11-27 04:36:27.794893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.390 [2024-11-27 04:36:27.794984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.390 [2024-11-27 04:36:27.795047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.390 [2024-11-27 04:36:27.795061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.390 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.391 [2024-11-27 04:36:27.946711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:31.391 [2024-11-27 04:36:27.949494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:31.391 [2024-11-27 04:36:27.949604] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:31.391 [2024-11-27 04:36:27.949688] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:31.391 [2024-11-27 04:36:27.949712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.391 [2024-11-27 04:36:27.949730] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:31.391 request: 00:19:31.391 { 00:19:31.391 "name": "raid_bdev1", 00:19:31.391 "raid_level": "raid1", 00:19:31.391 "base_bdevs": [ 00:19:31.391 "malloc1", 00:19:31.391 "malloc2" 00:19:31.391 ], 00:19:31.391 "superblock": false, 00:19:31.391 "method": "bdev_raid_create", 00:19:31.391 "req_id": 1 00:19:31.391 } 00:19:31.391 Got JSON-RPC error response 00:19:31.391 response: 00:19:31.391 { 00:19:31.391 "code": -17, 00:19:31.391 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:31.391 } 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.391 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:31.649 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:31.649 04:36:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:31.649 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 04:36:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 [2024-11-27 04:36:27.998560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:31.649 [2024-11-27 04:36:27.998629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.649 [2024-11-27 04:36:27.998650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:31.649 [2024-11-27 04:36:27.998662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.649 [2024-11-27 04:36:28.001206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.649 [2024-11-27 04:36:28.001250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:31.649 [2024-11-27 04:36:28.001368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:31.649 [2024-11-27 04:36:28.001438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:31.649 pt1 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.649 "name": "raid_bdev1", 00:19:31.649 "uuid": "528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:31.649 "strip_size_kb": 0, 00:19:31.649 "state": "configuring", 00:19:31.649 "raid_level": "raid1", 00:19:31.649 "superblock": true, 00:19:31.649 "num_base_bdevs": 2, 00:19:31.649 "num_base_bdevs_discovered": 1, 00:19:31.649 "num_base_bdevs_operational": 2, 00:19:31.649 "base_bdevs_list": [ 00:19:31.649 { 00:19:31.649 "name": "pt1", 00:19:31.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:31.649 "is_configured": true, 00:19:31.649 "data_offset": 256, 00:19:31.649 "data_size": 7936 00:19:31.649 }, 00:19:31.649 { 00:19:31.649 "name": null, 00:19:31.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.649 "is_configured": false, 00:19:31.649 "data_offset": 256, 00:19:31.649 "data_size": 7936 00:19:31.649 } 
00:19:31.649 ] 00:19:31.649 }' 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.649 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.908 [2024-11-27 04:36:28.453827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:31.908 [2024-11-27 04:36:28.453913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.908 [2024-11-27 04:36:28.453937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:31.908 [2024-11-27 04:36:28.453950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.908 [2024-11-27 04:36:28.454462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.908 [2024-11-27 04:36:28.454495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:31.908 [2024-11-27 04:36:28.454584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:31.908 [2024-11-27 04:36:28.454619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:31.908 [2024-11-27 04:36:28.454754] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:31.908 [2024-11-27 04:36:28.454774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:31.908 [2024-11-27 04:36:28.455042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:31.908 [2024-11-27 04:36:28.455239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:31.908 [2024-11-27 04:36:28.455257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:31.908 [2024-11-27 04:36:28.455422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.908 pt2 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.908 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.166 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.166 "name": "raid_bdev1", 00:19:32.166 "uuid": "528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:32.166 "strip_size_kb": 0, 00:19:32.166 "state": "online", 00:19:32.166 "raid_level": "raid1", 00:19:32.166 "superblock": true, 00:19:32.166 "num_base_bdevs": 2, 00:19:32.166 "num_base_bdevs_discovered": 2, 00:19:32.166 "num_base_bdevs_operational": 2, 00:19:32.166 "base_bdevs_list": [ 00:19:32.166 { 00:19:32.166 "name": "pt1", 00:19:32.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:32.166 "is_configured": true, 00:19:32.166 "data_offset": 256, 00:19:32.166 "data_size": 7936 00:19:32.166 }, 00:19:32.166 { 00:19:32.166 "name": "pt2", 00:19:32.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.166 "is_configured": true, 00:19:32.166 "data_offset": 256, 00:19:32.166 "data_size": 7936 00:19:32.166 } 00:19:32.166 ] 00:19:32.166 }' 00:19:32.166 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.166 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.425 [2024-11-27 04:36:28.873425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.425 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.425 "name": "raid_bdev1", 00:19:32.425 "aliases": [ 00:19:32.425 "528201bf-12de-45ea-b782-93e9a4382e9a" 00:19:32.425 ], 00:19:32.425 "product_name": "Raid Volume", 00:19:32.425 "block_size": 4096, 00:19:32.425 "num_blocks": 7936, 00:19:32.425 "uuid": "528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:32.425 "assigned_rate_limits": { 00:19:32.425 "rw_ios_per_sec": 0, 00:19:32.425 "rw_mbytes_per_sec": 0, 00:19:32.425 "r_mbytes_per_sec": 0, 00:19:32.425 "w_mbytes_per_sec": 0 00:19:32.425 }, 00:19:32.425 "claimed": false, 00:19:32.425 "zoned": false, 00:19:32.425 "supported_io_types": { 00:19:32.425 "read": true, 00:19:32.425 "write": true, 00:19:32.425 "unmap": false, 
00:19:32.425 "flush": false, 00:19:32.425 "reset": true, 00:19:32.425 "nvme_admin": false, 00:19:32.425 "nvme_io": false, 00:19:32.425 "nvme_io_md": false, 00:19:32.425 "write_zeroes": true, 00:19:32.425 "zcopy": false, 00:19:32.425 "get_zone_info": false, 00:19:32.425 "zone_management": false, 00:19:32.425 "zone_append": false, 00:19:32.425 "compare": false, 00:19:32.425 "compare_and_write": false, 00:19:32.425 "abort": false, 00:19:32.425 "seek_hole": false, 00:19:32.425 "seek_data": false, 00:19:32.425 "copy": false, 00:19:32.425 "nvme_iov_md": false 00:19:32.425 }, 00:19:32.425 "memory_domains": [ 00:19:32.425 { 00:19:32.425 "dma_device_id": "system", 00:19:32.425 "dma_device_type": 1 00:19:32.425 }, 00:19:32.425 { 00:19:32.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.425 "dma_device_type": 2 00:19:32.425 }, 00:19:32.425 { 00:19:32.425 "dma_device_id": "system", 00:19:32.425 "dma_device_type": 1 00:19:32.425 }, 00:19:32.425 { 00:19:32.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.425 "dma_device_type": 2 00:19:32.425 } 00:19:32.425 ], 00:19:32.425 "driver_specific": { 00:19:32.425 "raid": { 00:19:32.425 "uuid": "528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:32.425 "strip_size_kb": 0, 00:19:32.425 "state": "online", 00:19:32.425 "raid_level": "raid1", 00:19:32.425 "superblock": true, 00:19:32.425 "num_base_bdevs": 2, 00:19:32.425 "num_base_bdevs_discovered": 2, 00:19:32.425 "num_base_bdevs_operational": 2, 00:19:32.425 "base_bdevs_list": [ 00:19:32.425 { 00:19:32.425 "name": "pt1", 00:19:32.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:32.425 "is_configured": true, 00:19:32.425 "data_offset": 256, 00:19:32.425 "data_size": 7936 00:19:32.425 }, 00:19:32.425 { 00:19:32.425 "name": "pt2", 00:19:32.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.426 "is_configured": true, 00:19:32.426 "data_offset": 256, 00:19:32.426 "data_size": 7936 00:19:32.426 } 00:19:32.426 ] 00:19:32.426 } 00:19:32.426 } 00:19:32.426 }' 00:19:32.426 
04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:32.426 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:32.426 pt2' 00:19:32.426 04:36:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:32.685 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.686 [2024-11-27 04:36:29.128922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 528201bf-12de-45ea-b782-93e9a4382e9a '!=' 528201bf-12de-45ea-b782-93e9a4382e9a ']' 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.686 [2024-11-27 04:36:29.172665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.686 "name": "raid_bdev1", 00:19:32.686 "uuid": 
"528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:32.686 "strip_size_kb": 0, 00:19:32.686 "state": "online", 00:19:32.686 "raid_level": "raid1", 00:19:32.686 "superblock": true, 00:19:32.686 "num_base_bdevs": 2, 00:19:32.686 "num_base_bdevs_discovered": 1, 00:19:32.686 "num_base_bdevs_operational": 1, 00:19:32.686 "base_bdevs_list": [ 00:19:32.686 { 00:19:32.686 "name": null, 00:19:32.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.686 "is_configured": false, 00:19:32.686 "data_offset": 0, 00:19:32.686 "data_size": 7936 00:19:32.686 }, 00:19:32.686 { 00:19:32.686 "name": "pt2", 00:19:32.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.686 "is_configured": true, 00:19:32.686 "data_offset": 256, 00:19:32.686 "data_size": 7936 00:19:32.686 } 00:19:32.686 ] 00:19:32.686 }' 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.686 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.253 [2024-11-27 04:36:29.607872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.253 [2024-11-27 04:36:29.607911] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.253 [2024-11-27 04:36:29.608006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.253 [2024-11-27 04:36:29.608060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.253 [2024-11-27 04:36:29.608073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.253 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.253 [2024-11-27 04:36:29.683751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:33.253 [2024-11-27 04:36:29.683830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.253 [2024-11-27 04:36:29.683850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:33.253 [2024-11-27 04:36:29.683862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.253 [2024-11-27 04:36:29.686398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.253 [2024-11-27 04:36:29.686442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:33.253 [2024-11-27 04:36:29.686535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:33.253 [2024-11-27 04:36:29.686589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:33.253 [2024-11-27 04:36:29.686731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:33.253 [2024-11-27 04:36:29.686762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:33.253 [2024-11-27 04:36:29.687040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:33.254 [2024-11-27 04:36:29.687230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:33.254 [2024-11-27 04:36:29.687249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:19:33.254 [2024-11-27 04:36:29.687405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.254 pt2 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.254 04:36:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.254 "name": "raid_bdev1", 00:19:33.254 "uuid": "528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:33.254 "strip_size_kb": 0, 00:19:33.254 "state": "online", 00:19:33.254 "raid_level": "raid1", 00:19:33.254 "superblock": true, 00:19:33.254 "num_base_bdevs": 2, 00:19:33.254 "num_base_bdevs_discovered": 1, 00:19:33.254 "num_base_bdevs_operational": 1, 00:19:33.254 "base_bdevs_list": [ 00:19:33.254 { 00:19:33.254 "name": null, 00:19:33.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.254 "is_configured": false, 00:19:33.254 "data_offset": 256, 00:19:33.254 "data_size": 7936 00:19:33.254 }, 00:19:33.254 { 00:19:33.254 "name": "pt2", 00:19:33.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.254 "is_configured": true, 00:19:33.254 "data_offset": 256, 00:19:33.254 "data_size": 7936 00:19:33.254 } 00:19:33.254 ] 00:19:33.254 }' 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.254 04:36:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.822 [2024-11-27 04:36:30.154931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.822 [2024-11-27 04:36:30.154971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.822 [2024-11-27 04:36:30.155060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.822 [2024-11-27 04:36:30.155129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:33.822 [2024-11-27 04:36:30.155143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.822 [2024-11-27 04:36:30.214848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:33.822 [2024-11-27 04:36:30.214921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.822 [2024-11-27 04:36:30.214940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:33.822 [2024-11-27 04:36:30.214949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.822 [2024-11-27 04:36:30.217300] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.822 [2024-11-27 04:36:30.217337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:33.822 [2024-11-27 04:36:30.217432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:33.822 [2024-11-27 04:36:30.217478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:33.822 [2024-11-27 04:36:30.217649] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:33.822 [2024-11-27 04:36:30.217667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.822 [2024-11-27 04:36:30.217684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:33.822 [2024-11-27 04:36:30.217752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:33.822 [2024-11-27 04:36:30.217831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:33.822 [2024-11-27 04:36:30.217840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:33.822 [2024-11-27 04:36:30.218127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:33.822 [2024-11-27 04:36:30.218302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:33.822 [2024-11-27 04:36:30.218324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:33.822 [2024-11-27 04:36:30.218500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.822 pt1 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.822 "name": "raid_bdev1", 00:19:33.822 "uuid": "528201bf-12de-45ea-b782-93e9a4382e9a", 00:19:33.822 "strip_size_kb": 0, 00:19:33.822 "state": "online", 00:19:33.822 
"raid_level": "raid1", 00:19:33.822 "superblock": true, 00:19:33.822 "num_base_bdevs": 2, 00:19:33.822 "num_base_bdevs_discovered": 1, 00:19:33.822 "num_base_bdevs_operational": 1, 00:19:33.822 "base_bdevs_list": [ 00:19:33.822 { 00:19:33.822 "name": null, 00:19:33.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.822 "is_configured": false, 00:19:33.822 "data_offset": 256, 00:19:33.822 "data_size": 7936 00:19:33.822 }, 00:19:33.822 { 00:19:33.822 "name": "pt2", 00:19:33.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.822 "is_configured": true, 00:19:33.822 "data_offset": 256, 00:19:33.822 "data_size": 7936 00:19:33.822 } 00:19:33.822 ] 00:19:33.822 }' 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.822 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:19:34.468 [2024-11-27 04:36:30.738246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 528201bf-12de-45ea-b782-93e9a4382e9a '!=' 528201bf-12de-45ea-b782-93e9a4382e9a ']' 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86597 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86597 ']' 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86597 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86597 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.468 killing process with pid 86597 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86597' 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86597 00:19:34.468 [2024-11-27 04:36:30.804819] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:34.468 [2024-11-27 04:36:30.804929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.468 [2024-11-27 04:36:30.804986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.468 [2024-11-27 
04:36:30.805004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:34.468 04:36:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86597 00:19:34.727 [2024-11-27 04:36:31.031269] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:36.107 04:36:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:36.107 00:19:36.107 real 0m6.306s 00:19:36.107 user 0m9.528s 00:19:36.107 sys 0m1.103s 00:19:36.107 04:36:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.107 04:36:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.107 ************************************ 00:19:36.107 END TEST raid_superblock_test_4k 00:19:36.107 ************************************ 00:19:36.107 04:36:32 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:36.107 04:36:32 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:36.107 04:36:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:36.107 04:36:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.107 04:36:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:36.107 ************************************ 00:19:36.107 START TEST raid_rebuild_test_sb_4k 00:19:36.107 ************************************ 00:19:36.107 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:36.107 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:36.107 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:36.107 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:36.108 04:36:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86928 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86928 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86928 ']' 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.108 04:36:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.108 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:36.108 Zero copy mechanism will not be used. 00:19:36.108 [2024-11-27 04:36:32.409260] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:19:36.108 [2024-11-27 04:36:32.409404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86928 ] 00:19:36.108 [2024-11-27 04:36:32.585058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.368 [2024-11-27 04:36:32.706530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.368 [2024-11-27 04:36:32.914571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.368 [2024-11-27 04:36:32.914641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.937 BaseBdev1_malloc 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.937 [2024-11-27 04:36:33.302965] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:36.937 [2024-11-27 04:36:33.303055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.937 [2024-11-27 04:36:33.303081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:36.937 [2024-11-27 04:36:33.303094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.937 [2024-11-27 04:36:33.305424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.937 [2024-11-27 04:36:33.305468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:36.937 BaseBdev1 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.937 BaseBdev2_malloc 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.937 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.937 [2024-11-27 04:36:33.362451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:36.937 [2024-11-27 04:36:33.362525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:36.938 [2024-11-27 04:36:33.362552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:36.938 [2024-11-27 04:36:33.362565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.938 [2024-11-27 04:36:33.365047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.938 [2024-11-27 04:36:33.365109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:36.938 BaseBdev2 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.938 spare_malloc 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.938 spare_delay 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.938 
[2024-11-27 04:36:33.445753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:36.938 [2024-11-27 04:36:33.445823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.938 [2024-11-27 04:36:33.445847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:36.938 [2024-11-27 04:36:33.445859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.938 [2024-11-27 04:36:33.448287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.938 [2024-11-27 04:36:33.448332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:36.938 spare 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.938 [2024-11-27 04:36:33.457779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.938 [2024-11-27 04:36:33.459648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:36.938 [2024-11-27 04:36:33.459860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:36.938 [2024-11-27 04:36:33.459878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:36.938 [2024-11-27 04:36:33.460158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:36.938 [2024-11-27 04:36:33.460356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:36.938 [2024-11-27 
04:36:33.460374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:36.938 [2024-11-27 04:36:33.460559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.938 "name": "raid_bdev1", 00:19:36.938 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:36.938 "strip_size_kb": 0, 00:19:36.938 "state": "online", 00:19:36.938 "raid_level": "raid1", 00:19:36.938 "superblock": true, 00:19:36.938 "num_base_bdevs": 2, 00:19:36.938 "num_base_bdevs_discovered": 2, 00:19:36.938 "num_base_bdevs_operational": 2, 00:19:36.938 "base_bdevs_list": [ 00:19:36.938 { 00:19:36.938 "name": "BaseBdev1", 00:19:36.938 "uuid": "68a8ee0b-f934-5aa3-97fd-4fd9f32bffaf", 00:19:36.938 "is_configured": true, 00:19:36.938 "data_offset": 256, 00:19:36.938 "data_size": 7936 00:19:36.938 }, 00:19:36.938 { 00:19:36.938 "name": "BaseBdev2", 00:19:36.938 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:36.938 "is_configured": true, 00:19:36.938 "data_offset": 256, 00:19:36.938 "data_size": 7936 00:19:36.938 } 00:19:36.938 ] 00:19:36.938 }' 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.938 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.506 [2024-11-27 04:36:33.913370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:37.506 04:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:37.506 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:37.506 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:37.506 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:37.506 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:37.506 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:37.506 
04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:37.765 [2024-11-27 04:36:34.192609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:37.765 /dev/nbd0 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:37.765 1+0 records in 00:19:37.765 1+0 records out 00:19:37.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334217 s, 12.3 MB/s 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:37.765 04:36:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:37.765 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:38.421 7936+0 records in 00:19:38.421 7936+0 records out 00:19:38.421 32505856 bytes (33 MB, 31 MiB) copied, 0.605241 s, 53.7 MB/s 00:19:38.421 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:38.421 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:38.421 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:38.421 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:38.421 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:38.421 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:38.421 04:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:38.681 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:38.681 
[2024-11-27 04:36:35.108250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.681 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:38.681 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:38.681 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.681 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.681 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:38.681 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:38.681 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.682 [2024-11-27 04:36:35.124344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.682 04:36:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.682 "name": "raid_bdev1", 00:19:38.682 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:38.682 "strip_size_kb": 0, 00:19:38.682 "state": "online", 00:19:38.682 "raid_level": "raid1", 00:19:38.682 "superblock": true, 00:19:38.682 "num_base_bdevs": 2, 00:19:38.682 "num_base_bdevs_discovered": 1, 00:19:38.682 "num_base_bdevs_operational": 1, 00:19:38.682 "base_bdevs_list": [ 00:19:38.682 { 00:19:38.682 "name": null, 00:19:38.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.682 "is_configured": false, 00:19:38.682 "data_offset": 0, 00:19:38.682 "data_size": 7936 00:19:38.682 }, 00:19:38.682 { 00:19:38.682 "name": "BaseBdev2", 00:19:38.682 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:38.682 "is_configured": true, 00:19:38.682 "data_offset": 256, 00:19:38.682 
"data_size": 7936 00:19:38.682 } 00:19:38.682 ] 00:19:38.682 }' 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.682 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.250 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:39.250 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.250 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.250 [2024-11-27 04:36:35.595654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:39.250 [2024-11-27 04:36:35.615784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:39.250 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.250 04:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:39.250 [2024-11-27 04:36:35.617904] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:40.188 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.188 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.188 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.188 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.188 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.189 "name": "raid_bdev1", 00:19:40.189 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:40.189 "strip_size_kb": 0, 00:19:40.189 "state": "online", 00:19:40.189 "raid_level": "raid1", 00:19:40.189 "superblock": true, 00:19:40.189 "num_base_bdevs": 2, 00:19:40.189 "num_base_bdevs_discovered": 2, 00:19:40.189 "num_base_bdevs_operational": 2, 00:19:40.189 "process": { 00:19:40.189 "type": "rebuild", 00:19:40.189 "target": "spare", 00:19:40.189 "progress": { 00:19:40.189 "blocks": 2560, 00:19:40.189 "percent": 32 00:19:40.189 } 00:19:40.189 }, 00:19:40.189 "base_bdevs_list": [ 00:19:40.189 { 00:19:40.189 "name": "spare", 00:19:40.189 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:40.189 "is_configured": true, 00:19:40.189 "data_offset": 256, 00:19:40.189 "data_size": 7936 00:19:40.189 }, 00:19:40.189 { 00:19:40.189 "name": "BaseBdev2", 00:19:40.189 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:40.189 "is_configured": true, 00:19:40.189 "data_offset": 256, 00:19:40.189 "data_size": 7936 00:19:40.189 } 00:19:40.189 ] 00:19:40.189 }' 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.189 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.189 [2024-11-27 04:36:36.760683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:40.448 [2024-11-27 04:36:36.823614] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:40.448 [2024-11-27 04:36:36.823715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.448 [2024-11-27 04:36:36.823732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:40.448 [2024-11-27 04:36:36.823742] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.448 "name": "raid_bdev1", 00:19:40.448 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:40.448 "strip_size_kb": 0, 00:19:40.448 "state": "online", 00:19:40.448 "raid_level": "raid1", 00:19:40.448 "superblock": true, 00:19:40.448 "num_base_bdevs": 2, 00:19:40.448 "num_base_bdevs_discovered": 1, 00:19:40.448 "num_base_bdevs_operational": 1, 00:19:40.448 "base_bdevs_list": [ 00:19:40.448 { 00:19:40.448 "name": null, 00:19:40.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.448 "is_configured": false, 00:19:40.448 "data_offset": 0, 00:19:40.448 "data_size": 7936 00:19:40.448 }, 00:19:40.448 { 00:19:40.448 "name": "BaseBdev2", 00:19:40.448 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:40.448 "is_configured": true, 00:19:40.448 "data_offset": 256, 00:19:40.448 "data_size": 7936 00:19:40.448 } 00:19:40.448 ] 00:19:40.448 }' 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.448 04:36:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.017 04:36:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.017 "name": "raid_bdev1", 00:19:41.017 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:41.017 "strip_size_kb": 0, 00:19:41.017 "state": "online", 00:19:41.017 "raid_level": "raid1", 00:19:41.017 "superblock": true, 00:19:41.017 "num_base_bdevs": 2, 00:19:41.017 "num_base_bdevs_discovered": 1, 00:19:41.017 "num_base_bdevs_operational": 1, 00:19:41.017 "base_bdevs_list": [ 00:19:41.017 { 00:19:41.017 "name": null, 00:19:41.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.017 "is_configured": false, 00:19:41.017 "data_offset": 0, 00:19:41.017 "data_size": 7936 00:19:41.017 }, 00:19:41.017 { 00:19:41.017 "name": "BaseBdev2", 00:19:41.017 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:41.017 "is_configured": true, 00:19:41.017 "data_offset": 
256, 00:19:41.017 "data_size": 7936 00:19:41.017 } 00:19:41.017 ] 00:19:41.017 }' 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.017 [2024-11-27 04:36:37.456422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.017 [2024-11-27 04:36:37.474564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.017 04:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:41.017 [2024-11-27 04:36:37.476607] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.961 "name": "raid_bdev1", 00:19:41.961 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:41.961 "strip_size_kb": 0, 00:19:41.961 "state": "online", 00:19:41.961 "raid_level": "raid1", 00:19:41.961 "superblock": true, 00:19:41.961 "num_base_bdevs": 2, 00:19:41.961 "num_base_bdevs_discovered": 2, 00:19:41.961 "num_base_bdevs_operational": 2, 00:19:41.961 "process": { 00:19:41.961 "type": "rebuild", 00:19:41.961 "target": "spare", 00:19:41.961 "progress": { 00:19:41.961 "blocks": 2560, 00:19:41.961 "percent": 32 00:19:41.961 } 00:19:41.961 }, 00:19:41.961 "base_bdevs_list": [ 00:19:41.961 { 00:19:41.961 "name": "spare", 00:19:41.961 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:41.961 "is_configured": true, 00:19:41.961 "data_offset": 256, 00:19:41.961 "data_size": 7936 00:19:41.961 }, 00:19:41.961 { 00:19:41.961 "name": "BaseBdev2", 00:19:41.961 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:41.961 "is_configured": true, 00:19:41.961 "data_offset": 256, 00:19:41.961 "data_size": 7936 00:19:41.961 } 00:19:41.961 ] 00:19:41.961 }' 00:19:41.961 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:42.228 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=706 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.228 04:36:38 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.228 "name": "raid_bdev1", 00:19:42.228 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:42.228 "strip_size_kb": 0, 00:19:42.228 "state": "online", 00:19:42.228 "raid_level": "raid1", 00:19:42.228 "superblock": true, 00:19:42.228 "num_base_bdevs": 2, 00:19:42.228 "num_base_bdevs_discovered": 2, 00:19:42.228 "num_base_bdevs_operational": 2, 00:19:42.228 "process": { 00:19:42.228 "type": "rebuild", 00:19:42.228 "target": "spare", 00:19:42.228 "progress": { 00:19:42.228 "blocks": 2816, 00:19:42.228 "percent": 35 00:19:42.228 } 00:19:42.228 }, 00:19:42.228 "base_bdevs_list": [ 00:19:42.228 { 00:19:42.228 "name": "spare", 00:19:42.228 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:42.228 "is_configured": true, 00:19:42.228 "data_offset": 256, 00:19:42.228 "data_size": 7936 00:19:42.228 }, 00:19:42.228 { 00:19:42.228 "name": "BaseBdev2", 00:19:42.228 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:42.228 "is_configured": true, 00:19:42.228 "data_offset": 256, 00:19:42.228 "data_size": 7936 00:19:42.228 } 00:19:42.228 ] 00:19:42.228 }' 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.228 04:36:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.605 "name": "raid_bdev1", 00:19:43.605 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:43.605 "strip_size_kb": 0, 00:19:43.605 "state": "online", 00:19:43.605 "raid_level": "raid1", 00:19:43.605 "superblock": true, 00:19:43.605 "num_base_bdevs": 2, 00:19:43.605 "num_base_bdevs_discovered": 2, 00:19:43.605 "num_base_bdevs_operational": 2, 00:19:43.605 "process": { 00:19:43.605 "type": "rebuild", 00:19:43.605 "target": "spare", 00:19:43.605 "progress": { 00:19:43.605 "blocks": 5632, 00:19:43.605 "percent": 70 00:19:43.605 } 00:19:43.605 }, 00:19:43.605 "base_bdevs_list": [ 00:19:43.605 { 
00:19:43.605 "name": "spare", 00:19:43.605 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:43.605 "is_configured": true, 00:19:43.605 "data_offset": 256, 00:19:43.605 "data_size": 7936 00:19:43.605 }, 00:19:43.605 { 00:19:43.605 "name": "BaseBdev2", 00:19:43.605 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:43.605 "is_configured": true, 00:19:43.605 "data_offset": 256, 00:19:43.605 "data_size": 7936 00:19:43.605 } 00:19:43.605 ] 00:19:43.605 }' 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.605 04:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:44.194 [2024-11-27 04:36:40.591006] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:44.194 [2024-11-27 04:36:40.591115] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:44.194 [2024-11-27 04:36:40.591252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.454 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.454 "name": "raid_bdev1", 00:19:44.454 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:44.454 "strip_size_kb": 0, 00:19:44.454 "state": "online", 00:19:44.454 "raid_level": "raid1", 00:19:44.454 "superblock": true, 00:19:44.454 "num_base_bdevs": 2, 00:19:44.454 "num_base_bdevs_discovered": 2, 00:19:44.454 "num_base_bdevs_operational": 2, 00:19:44.454 "base_bdevs_list": [ 00:19:44.454 { 00:19:44.454 "name": "spare", 00:19:44.454 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:44.454 "is_configured": true, 00:19:44.454 "data_offset": 256, 00:19:44.454 "data_size": 7936 00:19:44.455 }, 00:19:44.455 { 00:19:44.455 "name": "BaseBdev2", 00:19:44.455 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:44.455 "is_configured": true, 00:19:44.455 "data_offset": 256, 00:19:44.455 "data_size": 7936 00:19:44.455 } 00:19:44.455 ] 00:19:44.455 }' 00:19:44.455 04:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.455 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:44.455 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.716 "name": "raid_bdev1", 00:19:44.716 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:44.716 "strip_size_kb": 0, 00:19:44.716 "state": "online", 00:19:44.716 "raid_level": "raid1", 00:19:44.716 "superblock": true, 00:19:44.716 "num_base_bdevs": 2, 00:19:44.716 "num_base_bdevs_discovered": 2, 00:19:44.716 "num_base_bdevs_operational": 2, 00:19:44.716 "base_bdevs_list": [ 00:19:44.716 { 00:19:44.716 "name": "spare", 00:19:44.716 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:44.716 "is_configured": true, 00:19:44.716 
"data_offset": 256, 00:19:44.716 "data_size": 7936 00:19:44.716 }, 00:19:44.716 { 00:19:44.716 "name": "BaseBdev2", 00:19:44.716 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:44.716 "is_configured": true, 00:19:44.716 "data_offset": 256, 00:19:44.716 "data_size": 7936 00:19:44.716 } 00:19:44.716 ] 00:19:44.716 }' 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.716 "name": "raid_bdev1", 00:19:44.716 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:44.716 "strip_size_kb": 0, 00:19:44.716 "state": "online", 00:19:44.716 "raid_level": "raid1", 00:19:44.716 "superblock": true, 00:19:44.716 "num_base_bdevs": 2, 00:19:44.716 "num_base_bdevs_discovered": 2, 00:19:44.716 "num_base_bdevs_operational": 2, 00:19:44.716 "base_bdevs_list": [ 00:19:44.716 { 00:19:44.716 "name": "spare", 00:19:44.716 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:44.716 "is_configured": true, 00:19:44.716 "data_offset": 256, 00:19:44.716 "data_size": 7936 00:19:44.716 }, 00:19:44.716 { 00:19:44.716 "name": "BaseBdev2", 00:19:44.716 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:44.716 "is_configured": true, 00:19:44.716 "data_offset": 256, 00:19:44.716 "data_size": 7936 00:19:44.716 } 00:19:44.716 ] 00:19:44.716 }' 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.716 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.286 
[2024-11-27 04:36:41.640892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:45.286 [2024-11-27 04:36:41.640940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.286 [2024-11-27 04:36:41.641045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.286 [2024-11-27 04:36:41.641133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.286 [2024-11-27 04:36:41.641149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:45.286 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:45.546 /dev/nbd0 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:45.546 1+0 records in 00:19:45.546 1+0 records out 00:19:45.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238411 s, 17.2 MB/s 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:45.546 04:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:45.821 /dev/nbd1 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:45.821 1+0 records in 00:19:45.821 1+0 records out 00:19:45.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451909 s, 9.1 MB/s 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:45.821 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:46.081 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:46.081 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.081 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:46.081 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:46.081 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:46.081 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:46.081 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:46.339 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:46.340 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:46.340 04:36:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.340 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.340 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:46.340 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:46.340 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.340 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:46.340 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:46.340 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.340 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:46.599 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.599 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:46.599 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.599 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:46.599 [2024-11-27 04:36:42.937279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:46.599 [2024-11-27 04:36:42.937380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.599 [2024-11-27 04:36:42.937408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:46.599 [2024-11-27 04:36:42.937418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.599 [2024-11-27 04:36:42.939841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.599 
[2024-11-27 04:36:42.939887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:46.599 [2024-11-27 04:36:42.939996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:46.599 [2024-11-27 04:36:42.940054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.599 [2024-11-27 04:36:42.940226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.599 spare 00:19:46.599 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.599 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:46.599 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.599 04:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:46.599 [2024-11-27 04:36:43.040157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:46.599 [2024-11-27 04:36:43.040228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:46.599 [2024-11-27 04:36:43.040616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:46.599 [2024-11-27 04:36:43.040871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:46.599 [2024-11-27 04:36:43.040892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:46.599 [2024-11-27 04:36:43.041151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:46.599 04:36:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.599 "name": "raid_bdev1", 00:19:46.599 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:46.599 "strip_size_kb": 0, 00:19:46.599 "state": "online", 00:19:46.599 "raid_level": "raid1", 00:19:46.599 "superblock": true, 00:19:46.599 "num_base_bdevs": 2, 00:19:46.599 "num_base_bdevs_discovered": 2, 00:19:46.599 "num_base_bdevs_operational": 2, 
00:19:46.599 "base_bdevs_list": [ 00:19:46.599 { 00:19:46.599 "name": "spare", 00:19:46.599 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:46.599 "is_configured": true, 00:19:46.599 "data_offset": 256, 00:19:46.599 "data_size": 7936 00:19:46.599 }, 00:19:46.599 { 00:19:46.599 "name": "BaseBdev2", 00:19:46.599 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:46.599 "is_configured": true, 00:19:46.599 "data_offset": 256, 00:19:46.599 "data_size": 7936 00:19:46.599 } 00:19:46.599 ] 00:19:46.599 }' 00:19:46.599 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.600 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.168 "name": "raid_bdev1", 00:19:47.168 
"uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:47.168 "strip_size_kb": 0, 00:19:47.168 "state": "online", 00:19:47.168 "raid_level": "raid1", 00:19:47.168 "superblock": true, 00:19:47.168 "num_base_bdevs": 2, 00:19:47.168 "num_base_bdevs_discovered": 2, 00:19:47.168 "num_base_bdevs_operational": 2, 00:19:47.168 "base_bdevs_list": [ 00:19:47.168 { 00:19:47.168 "name": "spare", 00:19:47.168 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:47.168 "is_configured": true, 00:19:47.168 "data_offset": 256, 00:19:47.168 "data_size": 7936 00:19:47.168 }, 00:19:47.168 { 00:19:47.168 "name": "BaseBdev2", 00:19:47.168 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:47.168 "is_configured": true, 00:19:47.168 "data_offset": 256, 00:19:47.168 "data_size": 7936 00:19:47.168 } 00:19:47.168 ] 00:19:47.168 }' 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.168 [2024-11-27 04:36:43.684145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.168 
04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.168 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.168 "name": "raid_bdev1", 00:19:47.168 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:47.169 "strip_size_kb": 0, 00:19:47.169 "state": "online", 00:19:47.169 "raid_level": "raid1", 00:19:47.169 "superblock": true, 00:19:47.169 "num_base_bdevs": 2, 00:19:47.169 "num_base_bdevs_discovered": 1, 00:19:47.169 "num_base_bdevs_operational": 1, 00:19:47.169 "base_bdevs_list": [ 00:19:47.169 { 00:19:47.169 "name": null, 00:19:47.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.169 "is_configured": false, 00:19:47.169 "data_offset": 0, 00:19:47.169 "data_size": 7936 00:19:47.169 }, 00:19:47.169 { 00:19:47.169 "name": "BaseBdev2", 00:19:47.169 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:47.169 "is_configured": true, 00:19:47.169 "data_offset": 256, 00:19:47.169 "data_size": 7936 00:19:47.169 } 00:19:47.169 ] 00:19:47.169 }' 00:19:47.169 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.169 04:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.736 04:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:47.736 04:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.736 04:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.736 [2024-11-27 04:36:44.151379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.736 [2024-11-27 04:36:44.151605] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:19:47.736 [2024-11-27 04:36:44.151622] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:47.736 [2024-11-27 04:36:44.151659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.736 [2024-11-27 04:36:44.168531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:47.736 04:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.736 04:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:47.736 [2024-11-27 04:36:44.170428] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.674 
"name": "raid_bdev1", 00:19:48.674 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:48.674 "strip_size_kb": 0, 00:19:48.674 "state": "online", 00:19:48.674 "raid_level": "raid1", 00:19:48.674 "superblock": true, 00:19:48.674 "num_base_bdevs": 2, 00:19:48.674 "num_base_bdevs_discovered": 2, 00:19:48.674 "num_base_bdevs_operational": 2, 00:19:48.674 "process": { 00:19:48.674 "type": "rebuild", 00:19:48.674 "target": "spare", 00:19:48.674 "progress": { 00:19:48.674 "blocks": 2560, 00:19:48.674 "percent": 32 00:19:48.674 } 00:19:48.674 }, 00:19:48.674 "base_bdevs_list": [ 00:19:48.674 { 00:19:48.674 "name": "spare", 00:19:48.674 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:48.674 "is_configured": true, 00:19:48.674 "data_offset": 256, 00:19:48.674 "data_size": 7936 00:19:48.674 }, 00:19:48.674 { 00:19:48.674 "name": "BaseBdev2", 00:19:48.674 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:48.674 "is_configured": true, 00:19:48.674 "data_offset": 256, 00:19:48.674 "data_size": 7936 00:19:48.674 } 00:19:48.674 ] 00:19:48.674 }' 00:19:48.674 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:48.935 [2024-11-27 04:36:45.333956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.935 [2024-11-27 
04:36:45.376249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:48.935 [2024-11-27 04:36:45.376364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.935 [2024-11-27 04:36:45.376380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.935 [2024-11-27 04:36:45.376390] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.935 04:36:45 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.935 "name": "raid_bdev1", 00:19:48.935 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:48.935 "strip_size_kb": 0, 00:19:48.935 "state": "online", 00:19:48.935 "raid_level": "raid1", 00:19:48.935 "superblock": true, 00:19:48.935 "num_base_bdevs": 2, 00:19:48.935 "num_base_bdevs_discovered": 1, 00:19:48.935 "num_base_bdevs_operational": 1, 00:19:48.935 "base_bdevs_list": [ 00:19:48.935 { 00:19:48.935 "name": null, 00:19:48.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.935 "is_configured": false, 00:19:48.935 "data_offset": 0, 00:19:48.935 "data_size": 7936 00:19:48.935 }, 00:19:48.935 { 00:19:48.935 "name": "BaseBdev2", 00:19:48.935 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:48.935 "is_configured": true, 00:19:48.935 "data_offset": 256, 00:19:48.935 "data_size": 7936 00:19:48.935 } 00:19:48.935 ] 00:19:48.935 }' 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.935 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.504 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:49.504 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.504 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.504 [2024-11-27 04:36:45.871661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:49.504 [2024-11-27 04:36:45.871736] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.504 [2024-11-27 04:36:45.871763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:49.504 [2024-11-27 04:36:45.871777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.504 [2024-11-27 04:36:45.872256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.504 [2024-11-27 04:36:45.872285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:49.504 [2024-11-27 04:36:45.872381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:49.504 [2024-11-27 04:36:45.872402] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:49.504 [2024-11-27 04:36:45.872415] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:49.504 [2024-11-27 04:36:45.872438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.504 [2024-11-27 04:36:45.889145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:49.504 spare 00:19:49.504 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.504 04:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:49.504 [2024-11-27 04:36:45.891017] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.441 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.441 "name": "raid_bdev1", 00:19:50.441 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:50.441 "strip_size_kb": 0, 00:19:50.441 
"state": "online", 00:19:50.441 "raid_level": "raid1", 00:19:50.441 "superblock": true, 00:19:50.441 "num_base_bdevs": 2, 00:19:50.441 "num_base_bdevs_discovered": 2, 00:19:50.441 "num_base_bdevs_operational": 2, 00:19:50.441 "process": { 00:19:50.441 "type": "rebuild", 00:19:50.441 "target": "spare", 00:19:50.441 "progress": { 00:19:50.441 "blocks": 2560, 00:19:50.441 "percent": 32 00:19:50.441 } 00:19:50.441 }, 00:19:50.441 "base_bdevs_list": [ 00:19:50.441 { 00:19:50.441 "name": "spare", 00:19:50.442 "uuid": "faf33a5b-75fc-5081-9399-b164d55def55", 00:19:50.442 "is_configured": true, 00:19:50.442 "data_offset": 256, 00:19:50.442 "data_size": 7936 00:19:50.442 }, 00:19:50.442 { 00:19:50.442 "name": "BaseBdev2", 00:19:50.442 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:50.442 "is_configured": true, 00:19:50.442 "data_offset": 256, 00:19:50.442 "data_size": 7936 00:19:50.442 } 00:19:50.442 ] 00:19:50.442 }' 00:19:50.442 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.442 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.442 04:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.701 [2024-11-27 04:36:47.038427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.701 [2024-11-27 04:36:47.096788] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:50.701 [2024-11-27 04:36:47.096886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.701 [2024-11-27 04:36:47.096905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.701 [2024-11-27 04:36:47.096912] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.701 04:36:47 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.701 "name": "raid_bdev1", 00:19:50.701 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:50.701 "strip_size_kb": 0, 00:19:50.701 "state": "online", 00:19:50.701 "raid_level": "raid1", 00:19:50.701 "superblock": true, 00:19:50.701 "num_base_bdevs": 2, 00:19:50.701 "num_base_bdevs_discovered": 1, 00:19:50.701 "num_base_bdevs_operational": 1, 00:19:50.701 "base_bdevs_list": [ 00:19:50.701 { 00:19:50.701 "name": null, 00:19:50.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.701 "is_configured": false, 00:19:50.701 "data_offset": 0, 00:19:50.701 "data_size": 7936 00:19:50.701 }, 00:19:50.701 { 00:19:50.701 "name": "BaseBdev2", 00:19:50.701 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:50.701 "is_configured": true, 00:19:50.701 "data_offset": 256, 00:19:50.701 "data_size": 7936 00:19:50.701 } 00:19:50.701 ] 00:19:50.701 }' 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.701 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.269 "name": "raid_bdev1", 00:19:51.269 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:51.269 "strip_size_kb": 0, 00:19:51.269 "state": "online", 00:19:51.269 "raid_level": "raid1", 00:19:51.269 "superblock": true, 00:19:51.269 "num_base_bdevs": 2, 00:19:51.269 "num_base_bdevs_discovered": 1, 00:19:51.269 "num_base_bdevs_operational": 1, 00:19:51.269 "base_bdevs_list": [ 00:19:51.269 { 00:19:51.269 "name": null, 00:19:51.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.269 "is_configured": false, 00:19:51.269 "data_offset": 0, 00:19:51.269 "data_size": 7936 00:19:51.269 }, 00:19:51.269 { 00:19:51.269 "name": "BaseBdev2", 00:19:51.269 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:51.269 "is_configured": true, 00:19:51.269 "data_offset": 256, 00:19:51.269 "data_size": 7936 00:19:51.269 } 00:19:51.269 ] 00:19:51.269 }' 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:51.269 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.270 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.270 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:51.270 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.270 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.270 [2024-11-27 04:36:47.777399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:51.270 [2024-11-27 04:36:47.777466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.270 [2024-11-27 04:36:47.777497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:51.270 [2024-11-27 04:36:47.777519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.270 [2024-11-27 04:36:47.778000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.270 [2024-11-27 04:36:47.778024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:51.270 [2024-11-27 04:36:47.778128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:51.270 [2024-11-27 04:36:47.778147] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:51.270 [2024-11-27 04:36:47.778158] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:51.270 [2024-11-27 04:36:47.778169] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:51.270 BaseBdev1 00:19:51.270 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.270 04:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.208 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.469 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.469 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.469 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.469 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.469 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.469 "name": "raid_bdev1", 00:19:52.469 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:52.469 "strip_size_kb": 0, 00:19:52.469 "state": "online", 00:19:52.469 "raid_level": "raid1", 00:19:52.469 "superblock": true, 00:19:52.469 "num_base_bdevs": 2, 00:19:52.469 "num_base_bdevs_discovered": 1, 00:19:52.469 "num_base_bdevs_operational": 1, 00:19:52.469 "base_bdevs_list": [ 00:19:52.469 { 00:19:52.469 "name": null, 00:19:52.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.469 "is_configured": false, 00:19:52.469 "data_offset": 0, 00:19:52.469 "data_size": 7936 00:19:52.469 }, 00:19:52.469 { 00:19:52.469 "name": "BaseBdev2", 00:19:52.469 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:52.469 "is_configured": true, 00:19:52.469 "data_offset": 256, 00:19:52.469 "data_size": 7936 00:19:52.469 } 00:19:52.469 ] 00:19:52.469 }' 00:19:52.469 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.469 04:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.729 "name": "raid_bdev1", 00:19:52.729 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:52.729 "strip_size_kb": 0, 00:19:52.729 "state": "online", 00:19:52.729 "raid_level": "raid1", 00:19:52.729 "superblock": true, 00:19:52.729 "num_base_bdevs": 2, 00:19:52.729 "num_base_bdevs_discovered": 1, 00:19:52.729 "num_base_bdevs_operational": 1, 00:19:52.729 "base_bdevs_list": [ 00:19:52.729 { 00:19:52.729 "name": null, 00:19:52.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.729 "is_configured": false, 00:19:52.729 "data_offset": 0, 00:19:52.729 "data_size": 7936 00:19:52.729 }, 00:19:52.729 { 00:19:52.729 "name": "BaseBdev2", 00:19:52.729 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:52.729 "is_configured": true, 00:19:52.729 "data_offset": 256, 00:19:52.729 "data_size": 7936 00:19:52.729 } 00:19:52.729 ] 00:19:52.729 }' 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.729 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.989 [2024-11-27 04:36:49.358848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.989 [2024-11-27 04:36:49.359036] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:52.989 [2024-11-27 04:36:49.359055] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:52.989 request: 00:19:52.989 { 00:19:52.989 "base_bdev": "BaseBdev1", 00:19:52.989 "raid_bdev": "raid_bdev1", 00:19:52.989 "method": "bdev_raid_add_base_bdev", 00:19:52.989 "req_id": 1 00:19:52.989 } 00:19:52.989 Got JSON-RPC error response 00:19:52.989 response: 00:19:52.989 { 00:19:52.989 "code": -22, 00:19:52.989 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:52.989 } 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.989 04:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.926 "name": "raid_bdev1", 00:19:53.926 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:53.926 "strip_size_kb": 0, 00:19:53.926 "state": "online", 00:19:53.926 "raid_level": "raid1", 00:19:53.926 "superblock": true, 00:19:53.926 "num_base_bdevs": 2, 00:19:53.926 "num_base_bdevs_discovered": 1, 00:19:53.926 "num_base_bdevs_operational": 1, 00:19:53.926 "base_bdevs_list": [ 00:19:53.926 { 00:19:53.926 "name": null, 00:19:53.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.926 "is_configured": false, 00:19:53.926 "data_offset": 0, 00:19:53.926 "data_size": 7936 00:19:53.926 }, 00:19:53.926 { 00:19:53.926 "name": "BaseBdev2", 00:19:53.926 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:53.926 "is_configured": true, 00:19:53.926 "data_offset": 256, 00:19:53.926 "data_size": 7936 00:19:53.926 } 00:19:53.926 ] 00:19:53.926 }' 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.926 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.494 04:36:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.494 "name": "raid_bdev1", 00:19:54.494 "uuid": "a20cabcc-b38b-49f9-b9d6-33ce591999ab", 00:19:54.494 "strip_size_kb": 0, 00:19:54.494 "state": "online", 00:19:54.494 "raid_level": "raid1", 00:19:54.494 "superblock": true, 00:19:54.494 "num_base_bdevs": 2, 00:19:54.494 "num_base_bdevs_discovered": 1, 00:19:54.494 "num_base_bdevs_operational": 1, 00:19:54.494 "base_bdevs_list": [ 00:19:54.494 { 00:19:54.494 "name": null, 00:19:54.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.494 "is_configured": false, 00:19:54.494 "data_offset": 0, 00:19:54.494 "data_size": 7936 00:19:54.494 }, 00:19:54.494 { 00:19:54.494 "name": "BaseBdev2", 00:19:54.494 "uuid": "01c70669-b49f-508f-9eab-6fcd4a883955", 00:19:54.494 "is_configured": true, 00:19:54.494 "data_offset": 256, 00:19:54.494 "data_size": 7936 00:19:54.494 } 00:19:54.494 ] 00:19:54.494 }' 00:19:54.494 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.495 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:54.495 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.495 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:54.495 04:36:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86928 00:19:54.495 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86928 ']' 00:19:54.495 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86928 00:19:54.495 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:54.495 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.495 04:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86928 00:19:54.495 04:36:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.495 04:36:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.495 killing process with pid 86928 00:19:54.495 04:36:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86928' 00:19:54.495 04:36:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86928 00:19:54.495 Received shutdown signal, test time was about 60.000000 seconds 00:19:54.495 00:19:54.495 Latency(us) 00:19:54.495 [2024-11-27T04:36:51.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.495 [2024-11-27T04:36:51.082Z] =================================================================================================================== 00:19:54.495 [2024-11-27T04:36:51.082Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:54.495 [2024-11-27 04:36:51.014324] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:54.495 04:36:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86928 00:19:54.495 [2024-11-27 04:36:51.014459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.495 [2024-11-27 
04:36:51.014515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.495 [2024-11-27 04:36:51.014532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:55.063 [2024-11-27 04:36:51.339498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:55.996 04:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:55.996 00:19:55.996 real 0m20.223s 00:19:55.996 user 0m26.530s 00:19:55.996 sys 0m2.631s 00:19:55.996 04:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.996 04:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.996 ************************************ 00:19:55.996 END TEST raid_rebuild_test_sb_4k 00:19:55.996 ************************************ 00:19:56.255 04:36:52 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:56.255 04:36:52 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:56.255 04:36:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:56.255 04:36:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.255 04:36:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:56.255 ************************************ 00:19:56.255 START TEST raid_state_function_test_sb_md_separate 00:19:56.255 ************************************ 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:56.255 
04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:56.255 04:36:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87620 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:56.255 Process raid pid: 87620 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87620' 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87620 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87620 ']' 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.255 04:36:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.255 [2024-11-27 04:36:52.707494] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:56.255 [2024-11-27 04:36:52.707635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.514 [2024-11-27 04:36:52.879223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.514 [2024-11-27 04:36:53.002226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.773 [2024-11-27 04:36:53.214879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:56.773 [2024-11-27 04:36:53.214926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.032 [2024-11-27 04:36:53.571931] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:57.032 [2024-11-27 04:36:53.571991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:19:57.032 [2024-11-27 04:36:53.572004] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.032 [2024-11-27 04:36:53.572015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.032 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.291 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.291 "name": "Existed_Raid", 00:19:57.291 "uuid": "af810cf9-27a2-4441-a7da-ee40f2462ed3", 00:19:57.291 "strip_size_kb": 0, 00:19:57.291 "state": "configuring", 00:19:57.291 "raid_level": "raid1", 00:19:57.291 "superblock": true, 00:19:57.291 "num_base_bdevs": 2, 00:19:57.291 "num_base_bdevs_discovered": 0, 00:19:57.291 "num_base_bdevs_operational": 2, 00:19:57.291 "base_bdevs_list": [ 00:19:57.291 { 00:19:57.291 "name": "BaseBdev1", 00:19:57.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.291 "is_configured": false, 00:19:57.291 "data_offset": 0, 00:19:57.291 "data_size": 0 00:19:57.291 }, 00:19:57.291 { 00:19:57.291 "name": "BaseBdev2", 00:19:57.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.291 "is_configured": false, 00:19:57.291 "data_offset": 0, 00:19:57.291 "data_size": 0 00:19:57.291 } 00:19:57.291 ] 00:19:57.291 }' 00:19:57.291 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.291 04:36:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.551 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:57.551 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.552 
[2024-11-27 04:36:54.007144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:57.552 [2024-11-27 04:36:54.007240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.552 [2024-11-27 04:36:54.019130] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:57.552 [2024-11-27 04:36:54.019221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:57.552 [2024-11-27 04:36:54.019257] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.552 [2024-11-27 04:36:54.019302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.552 [2024-11-27 04:36:54.065400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.552 
BaseBdev1 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.552 [ 00:19:57.552 { 00:19:57.552 "name": "BaseBdev1", 00:19:57.552 "aliases": [ 00:19:57.552 "5b8470d2-0275-485d-ad25-7b9391e269e8" 00:19:57.552 ], 00:19:57.552 "product_name": "Malloc disk", 
00:19:57.552 "block_size": 4096, 00:19:57.552 "num_blocks": 8192, 00:19:57.552 "uuid": "5b8470d2-0275-485d-ad25-7b9391e269e8", 00:19:57.552 "md_size": 32, 00:19:57.552 "md_interleave": false, 00:19:57.552 "dif_type": 0, 00:19:57.552 "assigned_rate_limits": { 00:19:57.552 "rw_ios_per_sec": 0, 00:19:57.552 "rw_mbytes_per_sec": 0, 00:19:57.552 "r_mbytes_per_sec": 0, 00:19:57.552 "w_mbytes_per_sec": 0 00:19:57.552 }, 00:19:57.552 "claimed": true, 00:19:57.552 "claim_type": "exclusive_write", 00:19:57.552 "zoned": false, 00:19:57.552 "supported_io_types": { 00:19:57.552 "read": true, 00:19:57.552 "write": true, 00:19:57.552 "unmap": true, 00:19:57.552 "flush": true, 00:19:57.552 "reset": true, 00:19:57.552 "nvme_admin": false, 00:19:57.552 "nvme_io": false, 00:19:57.552 "nvme_io_md": false, 00:19:57.552 "write_zeroes": true, 00:19:57.552 "zcopy": true, 00:19:57.552 "get_zone_info": false, 00:19:57.552 "zone_management": false, 00:19:57.552 "zone_append": false, 00:19:57.552 "compare": false, 00:19:57.552 "compare_and_write": false, 00:19:57.552 "abort": true, 00:19:57.552 "seek_hole": false, 00:19:57.552 "seek_data": false, 00:19:57.552 "copy": true, 00:19:57.552 "nvme_iov_md": false 00:19:57.552 }, 00:19:57.552 "memory_domains": [ 00:19:57.552 { 00:19:57.552 "dma_device_id": "system", 00:19:57.552 "dma_device_type": 1 00:19:57.552 }, 00:19:57.552 { 00:19:57.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.552 "dma_device_type": 2 00:19:57.552 } 00:19:57.552 ], 00:19:57.552 "driver_specific": {} 00:19:57.552 } 00:19:57.552 ] 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:57.552 04:36:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.552 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.812 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.812 "name": "Existed_Raid", 00:19:57.812 "uuid": "f7825d94-01b5-4454-9ead-cde496c86631", 
00:19:57.812 "strip_size_kb": 0, 00:19:57.812 "state": "configuring", 00:19:57.812 "raid_level": "raid1", 00:19:57.812 "superblock": true, 00:19:57.812 "num_base_bdevs": 2, 00:19:57.812 "num_base_bdevs_discovered": 1, 00:19:57.812 "num_base_bdevs_operational": 2, 00:19:57.812 "base_bdevs_list": [ 00:19:57.812 { 00:19:57.812 "name": "BaseBdev1", 00:19:57.812 "uuid": "5b8470d2-0275-485d-ad25-7b9391e269e8", 00:19:57.812 "is_configured": true, 00:19:57.812 "data_offset": 256, 00:19:57.812 "data_size": 7936 00:19:57.812 }, 00:19:57.812 { 00:19:57.812 "name": "BaseBdev2", 00:19:57.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.812 "is_configured": false, 00:19:57.812 "data_offset": 0, 00:19:57.812 "data_size": 0 00:19:57.812 } 00:19:57.812 ] 00:19:57.812 }' 00:19:57.812 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.812 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.072 [2024-11-27 04:36:54.568640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:58.072 [2024-11-27 04:36:54.568707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:58.072 04:36:54 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.072 [2024-11-27 04:36:54.580644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:58.072 [2024-11-27 04:36:54.582555] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:58.072 [2024-11-27 04:36:54.582634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.072 "name": "Existed_Raid", 00:19:58.072 "uuid": "664b1659-e1be-4172-8cdc-94afffa0db98", 00:19:58.072 "strip_size_kb": 0, 00:19:58.072 "state": "configuring", 00:19:58.072 "raid_level": "raid1", 00:19:58.072 "superblock": true, 00:19:58.072 "num_base_bdevs": 2, 00:19:58.072 "num_base_bdevs_discovered": 1, 00:19:58.072 "num_base_bdevs_operational": 2, 00:19:58.072 "base_bdevs_list": [ 00:19:58.072 { 00:19:58.072 "name": "BaseBdev1", 00:19:58.072 "uuid": "5b8470d2-0275-485d-ad25-7b9391e269e8", 00:19:58.072 "is_configured": true, 00:19:58.072 "data_offset": 256, 00:19:58.072 "data_size": 7936 00:19:58.072 }, 00:19:58.072 { 00:19:58.072 "name": "BaseBdev2", 00:19:58.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.072 "is_configured": false, 00:19:58.072 "data_offset": 0, 00:19:58.072 "data_size": 0 00:19:58.072 } 00:19:58.072 ] 00:19:58.072 }' 00:19:58.072 04:36:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.072 04:36:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.641 [2024-11-27 04:36:55.056668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:58.641 [2024-11-27 04:36:55.056977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:58.641 [2024-11-27 04:36:55.056998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:58.641 [2024-11-27 04:36:55.057082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:58.641 [2024-11-27 04:36:55.057238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:58.641 [2024-11-27 04:36:55.057251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:58.641 [2024-11-27 04:36:55.057341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.641 BaseBdev2 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.641 [ 00:19:58.641 { 00:19:58.641 "name": "BaseBdev2", 00:19:58.641 "aliases": [ 00:19:58.641 "5f75c9f6-e107-417d-9e7c-c17d46b5b56d" 00:19:58.641 ], 00:19:58.641 "product_name": "Malloc disk", 00:19:58.641 "block_size": 4096, 00:19:58.641 "num_blocks": 8192, 00:19:58.641 "uuid": "5f75c9f6-e107-417d-9e7c-c17d46b5b56d", 00:19:58.641 "md_size": 32, 00:19:58.641 "md_interleave": false, 00:19:58.641 "dif_type": 0, 00:19:58.641 "assigned_rate_limits": { 00:19:58.641 "rw_ios_per_sec": 0, 00:19:58.641 "rw_mbytes_per_sec": 0, 00:19:58.641 "r_mbytes_per_sec": 0, 00:19:58.641 "w_mbytes_per_sec": 0 00:19:58.641 }, 00:19:58.641 "claimed": true, 00:19:58.641 "claim_type": 
"exclusive_write", 00:19:58.641 "zoned": false, 00:19:58.641 "supported_io_types": { 00:19:58.641 "read": true, 00:19:58.641 "write": true, 00:19:58.641 "unmap": true, 00:19:58.641 "flush": true, 00:19:58.641 "reset": true, 00:19:58.641 "nvme_admin": false, 00:19:58.641 "nvme_io": false, 00:19:58.641 "nvme_io_md": false, 00:19:58.641 "write_zeroes": true, 00:19:58.641 "zcopy": true, 00:19:58.641 "get_zone_info": false, 00:19:58.641 "zone_management": false, 00:19:58.641 "zone_append": false, 00:19:58.641 "compare": false, 00:19:58.641 "compare_and_write": false, 00:19:58.641 "abort": true, 00:19:58.641 "seek_hole": false, 00:19:58.641 "seek_data": false, 00:19:58.641 "copy": true, 00:19:58.641 "nvme_iov_md": false 00:19:58.641 }, 00:19:58.641 "memory_domains": [ 00:19:58.641 { 00:19:58.641 "dma_device_id": "system", 00:19:58.641 "dma_device_type": 1 00:19:58.641 }, 00:19:58.641 { 00:19:58.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.641 "dma_device_type": 2 00:19:58.641 } 00:19:58.641 ], 00:19:58.641 "driver_specific": {} 00:19:58.641 } 00:19:58.641 ] 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.641 
04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.641 "name": "Existed_Raid", 00:19:58.641 "uuid": "664b1659-e1be-4172-8cdc-94afffa0db98", 00:19:58.641 "strip_size_kb": 0, 00:19:58.641 "state": "online", 00:19:58.641 "raid_level": "raid1", 00:19:58.641 "superblock": true, 00:19:58.641 "num_base_bdevs": 2, 00:19:58.641 "num_base_bdevs_discovered": 2, 00:19:58.641 "num_base_bdevs_operational": 2, 00:19:58.641 
"base_bdevs_list": [ 00:19:58.641 { 00:19:58.641 "name": "BaseBdev1", 00:19:58.641 "uuid": "5b8470d2-0275-485d-ad25-7b9391e269e8", 00:19:58.641 "is_configured": true, 00:19:58.641 "data_offset": 256, 00:19:58.641 "data_size": 7936 00:19:58.641 }, 00:19:58.641 { 00:19:58.641 "name": "BaseBdev2", 00:19:58.641 "uuid": "5f75c9f6-e107-417d-9e7c-c17d46b5b56d", 00:19:58.641 "is_configured": true, 00:19:58.641 "data_offset": 256, 00:19:58.641 "data_size": 7936 00:19:58.641 } 00:19:58.641 ] 00:19:58.641 }' 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.641 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.211 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:19:59.212 [2024-11-27 04:36:55.580201] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:59.212 "name": "Existed_Raid", 00:19:59.212 "aliases": [ 00:19:59.212 "664b1659-e1be-4172-8cdc-94afffa0db98" 00:19:59.212 ], 00:19:59.212 "product_name": "Raid Volume", 00:19:59.212 "block_size": 4096, 00:19:59.212 "num_blocks": 7936, 00:19:59.212 "uuid": "664b1659-e1be-4172-8cdc-94afffa0db98", 00:19:59.212 "md_size": 32, 00:19:59.212 "md_interleave": false, 00:19:59.212 "dif_type": 0, 00:19:59.212 "assigned_rate_limits": { 00:19:59.212 "rw_ios_per_sec": 0, 00:19:59.212 "rw_mbytes_per_sec": 0, 00:19:59.212 "r_mbytes_per_sec": 0, 00:19:59.212 "w_mbytes_per_sec": 0 00:19:59.212 }, 00:19:59.212 "claimed": false, 00:19:59.212 "zoned": false, 00:19:59.212 "supported_io_types": { 00:19:59.212 "read": true, 00:19:59.212 "write": true, 00:19:59.212 "unmap": false, 00:19:59.212 "flush": false, 00:19:59.212 "reset": true, 00:19:59.212 "nvme_admin": false, 00:19:59.212 "nvme_io": false, 00:19:59.212 "nvme_io_md": false, 00:19:59.212 "write_zeroes": true, 00:19:59.212 "zcopy": false, 00:19:59.212 "get_zone_info": false, 00:19:59.212 "zone_management": false, 00:19:59.212 "zone_append": false, 00:19:59.212 "compare": false, 00:19:59.212 "compare_and_write": false, 00:19:59.212 "abort": false, 00:19:59.212 "seek_hole": false, 00:19:59.212 "seek_data": false, 00:19:59.212 "copy": false, 00:19:59.212 "nvme_iov_md": false 00:19:59.212 }, 00:19:59.212 "memory_domains": [ 00:19:59.212 { 00:19:59.212 "dma_device_id": "system", 00:19:59.212 "dma_device_type": 1 00:19:59.212 }, 00:19:59.212 { 00:19:59.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.212 "dma_device_type": 2 00:19:59.212 }, 00:19:59.212 { 
00:19:59.212 "dma_device_id": "system", 00:19:59.212 "dma_device_type": 1 00:19:59.212 }, 00:19:59.212 { 00:19:59.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.212 "dma_device_type": 2 00:19:59.212 } 00:19:59.212 ], 00:19:59.212 "driver_specific": { 00:19:59.212 "raid": { 00:19:59.212 "uuid": "664b1659-e1be-4172-8cdc-94afffa0db98", 00:19:59.212 "strip_size_kb": 0, 00:19:59.212 "state": "online", 00:19:59.212 "raid_level": "raid1", 00:19:59.212 "superblock": true, 00:19:59.212 "num_base_bdevs": 2, 00:19:59.212 "num_base_bdevs_discovered": 2, 00:19:59.212 "num_base_bdevs_operational": 2, 00:19:59.212 "base_bdevs_list": [ 00:19:59.212 { 00:19:59.212 "name": "BaseBdev1", 00:19:59.212 "uuid": "5b8470d2-0275-485d-ad25-7b9391e269e8", 00:19:59.212 "is_configured": true, 00:19:59.212 "data_offset": 256, 00:19:59.212 "data_size": 7936 00:19:59.212 }, 00:19:59.212 { 00:19:59.212 "name": "BaseBdev2", 00:19:59.212 "uuid": "5f75c9f6-e107-417d-9e7c-c17d46b5b56d", 00:19:59.212 "is_configured": true, 00:19:59.212 "data_offset": 256, 00:19:59.212 "data_size": 7936 00:19:59.212 } 00:19:59.212 ] 00:19:59.212 } 00:19:59.212 } 00:19:59.212 }' 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:59.212 BaseBdev2' 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.212 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.212 [2024-11-27 04:36:55.779683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:59.472 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.473 "name": "Existed_Raid", 00:19:59.473 "uuid": "664b1659-e1be-4172-8cdc-94afffa0db98", 00:19:59.473 "strip_size_kb": 0, 00:19:59.473 "state": "online", 00:19:59.473 "raid_level": "raid1", 00:19:59.473 "superblock": true, 00:19:59.473 "num_base_bdevs": 2, 00:19:59.473 "num_base_bdevs_discovered": 1, 00:19:59.473 "num_base_bdevs_operational": 1, 00:19:59.473 "base_bdevs_list": [ 00:19:59.473 { 00:19:59.473 "name": null, 00:19:59.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.473 "is_configured": false, 00:19:59.473 "data_offset": 0, 00:19:59.473 "data_size": 7936 00:19:59.473 }, 00:19:59.473 { 00:19:59.473 "name": "BaseBdev2", 00:19:59.473 "uuid": 
"5f75c9f6-e107-417d-9e7c-c17d46b5b56d", 00:19:59.473 "is_configured": true, 00:19:59.473 "data_offset": 256, 00:19:59.473 "data_size": 7936 00:19:59.473 } 00:19:59.473 ] 00:19:59.473 }' 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.473 04:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.042 [2024-11-27 04:36:56.402148] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:00.042 [2024-11-27 04:36:56.402253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.042 [2024-11-27 04:36:56.509787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.042 [2024-11-27 04:36:56.509932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.042 [2024-11-27 04:36:56.509953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:00.042 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:00.042 04:36:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87620 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87620 ']' 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87620 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87620 00:20:00.043 killing process with pid 87620 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87620' 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87620 00:20:00.043 [2024-11-27 04:36:56.602155] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.043 04:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87620 00:20:00.043 [2024-11-27 04:36:56.621244] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:01.422 04:36:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:20:01.422 ************************************ 00:20:01.422 END TEST raid_state_function_test_sb_md_separate 00:20:01.422 ************************************ 00:20:01.422 00:20:01.422 real 0m5.179s 00:20:01.422 user 0m7.452s 
00:20:01.422 sys 0m0.846s 00:20:01.422 04:36:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.422 04:36:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.422 04:36:57 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:20:01.422 04:36:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:01.422 04:36:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.422 04:36:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:01.422 ************************************ 00:20:01.422 START TEST raid_superblock_test_md_separate 00:20:01.422 ************************************ 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87867 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87867 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87867 ']' 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.422 04:36:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.422 [2024-11-27 04:36:57.942862] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:01.422 [2024-11-27 04:36:57.943057] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87867 ] 00:20:01.682 [2024-11-27 04:36:58.118466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.682 [2024-11-27 04:36:58.238177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.940 [2024-11-27 04:36:58.446406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.940 [2024-11-27 04:36:58.446550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:02.510 04:36:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.510 malloc1 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.510 [2024-11-27 04:36:58.864226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:02.510 [2024-11-27 04:36:58.864285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.510 [2024-11-27 04:36:58.864306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:02.510 [2024-11-27 04:36:58.864316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.510 [2024-11-27 04:36:58.866194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.510 [2024-11-27 04:36:58.866313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:20:02.510 pt1 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.510 malloc2 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.510 04:36:58 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.510 [2024-11-27 04:36:58.921743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.510 [2024-11-27 04:36:58.921860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.510 [2024-11-27 04:36:58.921925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:02.510 [2024-11-27 04:36:58.921966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.510 [2024-11-27 04:36:58.924077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.510 [2024-11-27 04:36:58.924177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.510 pt2 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.510 [2024-11-27 04:36:58.933752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:02.510 [2024-11-27 04:36:58.935581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.510 [2024-11-27 04:36:58.935856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:02.510 [2024-11-27 04:36:58.935918] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:02.510 [2024-11-27 04:36:58.936038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:02.510 [2024-11-27 04:36:58.936262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:02.510 [2024-11-27 04:36:58.936326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:02.510 [2024-11-27 04:36:58.936491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.510 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.511 04:36:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.511 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.511 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.511 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.511 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.511 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.511 "name": "raid_bdev1", 00:20:02.511 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:02.511 "strip_size_kb": 0, 00:20:02.511 "state": "online", 00:20:02.511 "raid_level": "raid1", 00:20:02.511 "superblock": true, 00:20:02.511 "num_base_bdevs": 2, 00:20:02.511 "num_base_bdevs_discovered": 2, 00:20:02.511 "num_base_bdevs_operational": 2, 00:20:02.511 "base_bdevs_list": [ 00:20:02.511 { 00:20:02.511 "name": "pt1", 00:20:02.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.511 "is_configured": true, 00:20:02.511 "data_offset": 256, 00:20:02.511 "data_size": 7936 00:20:02.511 }, 00:20:02.511 { 00:20:02.511 "name": "pt2", 00:20:02.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.511 "is_configured": true, 00:20:02.511 "data_offset": 256, 00:20:02.511 "data_size": 7936 00:20:02.511 } 00:20:02.511 ] 00:20:02.511 }' 00:20:02.511 04:36:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.511 04:36:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.085 [2024-11-27 04:36:59.405300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:03.085 "name": "raid_bdev1", 00:20:03.085 "aliases": [ 00:20:03.085 "a74fea28-ddec-4c6b-aadb-10f26b7b8195" 00:20:03.085 ], 00:20:03.085 "product_name": "Raid Volume", 00:20:03.085 "block_size": 4096, 00:20:03.085 "num_blocks": 7936, 00:20:03.085 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:03.085 "md_size": 32, 00:20:03.085 "md_interleave": false, 00:20:03.085 "dif_type": 0, 00:20:03.085 "assigned_rate_limits": { 00:20:03.085 "rw_ios_per_sec": 0, 00:20:03.085 "rw_mbytes_per_sec": 0, 00:20:03.085 "r_mbytes_per_sec": 0, 00:20:03.085 "w_mbytes_per_sec": 0 00:20:03.085 }, 00:20:03.085 "claimed": false, 00:20:03.085 "zoned": false, 
00:20:03.085 "supported_io_types": { 00:20:03.085 "read": true, 00:20:03.085 "write": true, 00:20:03.085 "unmap": false, 00:20:03.085 "flush": false, 00:20:03.085 "reset": true, 00:20:03.085 "nvme_admin": false, 00:20:03.085 "nvme_io": false, 00:20:03.085 "nvme_io_md": false, 00:20:03.085 "write_zeroes": true, 00:20:03.085 "zcopy": false, 00:20:03.085 "get_zone_info": false, 00:20:03.085 "zone_management": false, 00:20:03.085 "zone_append": false, 00:20:03.085 "compare": false, 00:20:03.085 "compare_and_write": false, 00:20:03.085 "abort": false, 00:20:03.085 "seek_hole": false, 00:20:03.085 "seek_data": false, 00:20:03.085 "copy": false, 00:20:03.085 "nvme_iov_md": false 00:20:03.085 }, 00:20:03.085 "memory_domains": [ 00:20:03.085 { 00:20:03.085 "dma_device_id": "system", 00:20:03.085 "dma_device_type": 1 00:20:03.085 }, 00:20:03.085 { 00:20:03.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.085 "dma_device_type": 2 00:20:03.085 }, 00:20:03.085 { 00:20:03.085 "dma_device_id": "system", 00:20:03.085 "dma_device_type": 1 00:20:03.085 }, 00:20:03.085 { 00:20:03.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.085 "dma_device_type": 2 00:20:03.085 } 00:20:03.085 ], 00:20:03.085 "driver_specific": { 00:20:03.085 "raid": { 00:20:03.085 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:03.085 "strip_size_kb": 0, 00:20:03.085 "state": "online", 00:20:03.085 "raid_level": "raid1", 00:20:03.085 "superblock": true, 00:20:03.085 "num_base_bdevs": 2, 00:20:03.085 "num_base_bdevs_discovered": 2, 00:20:03.085 "num_base_bdevs_operational": 2, 00:20:03.085 "base_bdevs_list": [ 00:20:03.085 { 00:20:03.085 "name": "pt1", 00:20:03.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.085 "is_configured": true, 00:20:03.085 "data_offset": 256, 00:20:03.085 "data_size": 7936 00:20:03.085 }, 00:20:03.085 { 00:20:03.085 "name": "pt2", 00:20:03.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.085 "is_configured": true, 00:20:03.085 "data_offset": 256, 
00:20:03.085 "data_size": 7936 00:20:03.085 } 00:20:03.085 ] 00:20:03.085 } 00:20:03.085 } 00:20:03.085 }' 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:03.085 pt2' 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.085 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.086 [2024-11-27 04:36:59.648839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.086 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a74fea28-ddec-4c6b-aadb-10f26b7b8195 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z a74fea28-ddec-4c6b-aadb-10f26b7b8195 ']' 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.346 [2024-11-27 04:36:59.692416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.346 [2024-11-27 04:36:59.692485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.346 [2024-11-27 04:36:59.692627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.346 [2024-11-27 04:36:59.692730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.346 [2024-11-27 04:36:59.692777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:03.346 04:36:59 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:03.346 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.347 [2024-11-27 04:36:59.836214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:03.347 [2024-11-27 04:36:59.838327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:03.347 [2024-11-27 04:36:59.838466] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:03.347 [2024-11-27 04:36:59.838576] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:03.347 [2024-11-27 04:36:59.838631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.347 [2024-11-27 04:36:59.838669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:03.347 request: 00:20:03.347 { 00:20:03.347 "name": 
"raid_bdev1", 00:20:03.347 "raid_level": "raid1", 00:20:03.347 "base_bdevs": [ 00:20:03.347 "malloc1", 00:20:03.347 "malloc2" 00:20:03.347 ], 00:20:03.347 "superblock": false, 00:20:03.347 "method": "bdev_raid_create", 00:20:03.347 "req_id": 1 00:20:03.347 } 00:20:03.347 Got JSON-RPC error response 00:20:03.347 response: 00:20:03.347 { 00:20:03.347 "code": -17, 00:20:03.347 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:03.347 } 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.347 [2024-11-27 04:36:59.900062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:03.347 [2024-11-27 04:36:59.900141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.347 [2024-11-27 04:36:59.900159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:03.347 [2024-11-27 04:36:59.900170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.347 [2024-11-27 04:36:59.902170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.347 [2024-11-27 04:36:59.902260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:03.347 [2024-11-27 04:36:59.902326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:03.347 [2024-11-27 04:36:59.902401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:03.347 pt1 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.347 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.607 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.607 "name": "raid_bdev1", 00:20:03.607 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:03.607 "strip_size_kb": 0, 00:20:03.607 "state": "configuring", 00:20:03.607 "raid_level": "raid1", 00:20:03.607 "superblock": true, 00:20:03.607 "num_base_bdevs": 2, 00:20:03.607 "num_base_bdevs_discovered": 1, 00:20:03.607 "num_base_bdevs_operational": 2, 00:20:03.607 "base_bdevs_list": [ 00:20:03.607 { 00:20:03.607 "name": "pt1", 00:20:03.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.607 "is_configured": true, 00:20:03.607 "data_offset": 256, 00:20:03.607 "data_size": 7936 00:20:03.607 }, 00:20:03.607 { 00:20:03.607 "name": null, 00:20:03.607 
"uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.607 "is_configured": false, 00:20:03.607 "data_offset": 256, 00:20:03.607 "data_size": 7936 00:20:03.607 } 00:20:03.607 ] 00:20:03.607 }' 00:20:03.607 04:36:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.607 04:36:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.867 [2024-11-27 04:37:00.379277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:03.867 [2024-11-27 04:37:00.379368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.867 [2024-11-27 04:37:00.379394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:03.867 [2024-11-27 04:37:00.379408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.867 [2024-11-27 04:37:00.379677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.867 [2024-11-27 04:37:00.379699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:03.867 [2024-11-27 04:37:00.379762] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:20:03.867 [2024-11-27 04:37:00.379787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:03.867 [2024-11-27 04:37:00.379923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:03.867 [2024-11-27 04:37:00.379936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:03.867 [2024-11-27 04:37:00.380025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:03.867 [2024-11-27 04:37:00.380194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:03.867 [2024-11-27 04:37:00.380205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:03.867 [2024-11-27 04:37:00.380333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.867 pt2 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.867 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.867 "name": "raid_bdev1", 00:20:03.867 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:03.867 "strip_size_kb": 0, 00:20:03.867 "state": "online", 00:20:03.867 "raid_level": "raid1", 00:20:03.867 "superblock": true, 00:20:03.867 "num_base_bdevs": 2, 00:20:03.867 "num_base_bdevs_discovered": 2, 00:20:03.867 "num_base_bdevs_operational": 2, 00:20:03.867 "base_bdevs_list": [ 00:20:03.867 { 00:20:03.867 "name": "pt1", 00:20:03.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.867 "is_configured": true, 00:20:03.867 "data_offset": 256, 00:20:03.867 "data_size": 7936 00:20:03.867 }, 00:20:03.867 { 00:20:03.867 "name": "pt2", 00:20:03.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.867 "is_configured": true, 00:20:03.867 "data_offset": 256, 
00:20:03.867 "data_size": 7936 00:20:03.867 } 00:20:03.867 ] 00:20:03.867 }' 00:20:03.868 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.868 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.437 [2024-11-27 04:37:00.834762] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:04.437 "name": "raid_bdev1", 00:20:04.437 "aliases": [ 00:20:04.437 "a74fea28-ddec-4c6b-aadb-10f26b7b8195" 00:20:04.437 ], 00:20:04.437 "product_name": 
"Raid Volume", 00:20:04.437 "block_size": 4096, 00:20:04.437 "num_blocks": 7936, 00:20:04.437 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:04.437 "md_size": 32, 00:20:04.437 "md_interleave": false, 00:20:04.437 "dif_type": 0, 00:20:04.437 "assigned_rate_limits": { 00:20:04.437 "rw_ios_per_sec": 0, 00:20:04.437 "rw_mbytes_per_sec": 0, 00:20:04.437 "r_mbytes_per_sec": 0, 00:20:04.437 "w_mbytes_per_sec": 0 00:20:04.437 }, 00:20:04.437 "claimed": false, 00:20:04.437 "zoned": false, 00:20:04.437 "supported_io_types": { 00:20:04.437 "read": true, 00:20:04.437 "write": true, 00:20:04.437 "unmap": false, 00:20:04.437 "flush": false, 00:20:04.437 "reset": true, 00:20:04.437 "nvme_admin": false, 00:20:04.437 "nvme_io": false, 00:20:04.437 "nvme_io_md": false, 00:20:04.437 "write_zeroes": true, 00:20:04.437 "zcopy": false, 00:20:04.437 "get_zone_info": false, 00:20:04.437 "zone_management": false, 00:20:04.437 "zone_append": false, 00:20:04.437 "compare": false, 00:20:04.437 "compare_and_write": false, 00:20:04.437 "abort": false, 00:20:04.437 "seek_hole": false, 00:20:04.437 "seek_data": false, 00:20:04.437 "copy": false, 00:20:04.437 "nvme_iov_md": false 00:20:04.437 }, 00:20:04.437 "memory_domains": [ 00:20:04.437 { 00:20:04.437 "dma_device_id": "system", 00:20:04.437 "dma_device_type": 1 00:20:04.437 }, 00:20:04.437 { 00:20:04.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.437 "dma_device_type": 2 00:20:04.437 }, 00:20:04.437 { 00:20:04.437 "dma_device_id": "system", 00:20:04.437 "dma_device_type": 1 00:20:04.437 }, 00:20:04.437 { 00:20:04.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.437 "dma_device_type": 2 00:20:04.437 } 00:20:04.437 ], 00:20:04.437 "driver_specific": { 00:20:04.437 "raid": { 00:20:04.437 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:04.437 "strip_size_kb": 0, 00:20:04.437 "state": "online", 00:20:04.437 "raid_level": "raid1", 00:20:04.437 "superblock": true, 00:20:04.437 "num_base_bdevs": 2, 00:20:04.437 
"num_base_bdevs_discovered": 2, 00:20:04.437 "num_base_bdevs_operational": 2, 00:20:04.437 "base_bdevs_list": [ 00:20:04.437 { 00:20:04.437 "name": "pt1", 00:20:04.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:04.437 "is_configured": true, 00:20:04.437 "data_offset": 256, 00:20:04.437 "data_size": 7936 00:20:04.437 }, 00:20:04.437 { 00:20:04.437 "name": "pt2", 00:20:04.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.437 "is_configured": true, 00:20:04.437 "data_offset": 256, 00:20:04.437 "data_size": 7936 00:20:04.437 } 00:20:04.437 ] 00:20:04.437 } 00:20:04.437 } 00:20:04.437 }' 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:04.437 pt2' 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.437 04:37:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.437 
04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:04.437 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:04.437 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:04.437 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:04.438 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:04.438 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.438 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:04.698 [2024-11-27 04:37:01.050477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' a74fea28-ddec-4c6b-aadb-10f26b7b8195 '!=' a74fea28-ddec-4c6b-aadb-10f26b7b8195 ']' 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.698 [2024-11-27 04:37:01.098138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.698 04:37:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.698 "name": "raid_bdev1", 00:20:04.698 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:04.698 "strip_size_kb": 0, 00:20:04.698 "state": "online", 00:20:04.698 "raid_level": "raid1", 00:20:04.698 "superblock": true, 00:20:04.698 "num_base_bdevs": 2, 00:20:04.698 "num_base_bdevs_discovered": 1, 00:20:04.698 "num_base_bdevs_operational": 1, 00:20:04.698 "base_bdevs_list": [ 00:20:04.698 { 00:20:04.698 "name": null, 00:20:04.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.698 "is_configured": false, 00:20:04.698 "data_offset": 0, 00:20:04.698 "data_size": 7936 00:20:04.698 }, 00:20:04.698 { 00:20:04.698 "name": "pt2", 00:20:04.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.698 "is_configured": true, 00:20:04.698 "data_offset": 256, 00:20:04.698 "data_size": 7936 00:20:04.698 } 00:20:04.698 ] 00:20:04.698 }' 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:20:04.698 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.958 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:04.958 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.958 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.958 [2024-11-27 04:37:01.521353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:04.958 [2024-11-27 04:37:01.521388] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.958 [2024-11-27 04:37:01.521478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.958 [2024-11-27 04:37:01.521532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.958 [2024-11-27 04:37:01.521544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:04.958 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.958 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:04.958 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.958 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.958 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:04.958 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:05.217 04:37:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.217 [2024-11-27 04:37:01.581258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:05.217 [2024-11-27 04:37:01.581331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.217 
[2024-11-27 04:37:01.581351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:05.217 [2024-11-27 04:37:01.581364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.217 [2024-11-27 04:37:01.583692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.217 [2024-11-27 04:37:01.583740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:05.217 [2024-11-27 04:37:01.583807] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:05.217 [2024-11-27 04:37:01.583861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:05.217 [2024-11-27 04:37:01.583971] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:05.217 [2024-11-27 04:37:01.583986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:05.217 [2024-11-27 04:37:01.584070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:05.217 [2024-11-27 04:37:01.584217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:05.217 [2024-11-27 04:37:01.584228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:05.217 [2024-11-27 04:37:01.584352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.217 pt2 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.217 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.218 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.218 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.218 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.218 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.218 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.218 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.218 "name": "raid_bdev1", 00:20:05.218 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:05.218 "strip_size_kb": 0, 00:20:05.218 "state": "online", 00:20:05.218 "raid_level": "raid1", 00:20:05.218 "superblock": true, 00:20:05.218 "num_base_bdevs": 2, 00:20:05.218 "num_base_bdevs_discovered": 1, 00:20:05.218 "num_base_bdevs_operational": 1, 00:20:05.218 "base_bdevs_list": [ 00:20:05.218 { 00:20:05.218 
"name": null, 00:20:05.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.218 "is_configured": false, 00:20:05.218 "data_offset": 256, 00:20:05.218 "data_size": 7936 00:20:05.218 }, 00:20:05.218 { 00:20:05.218 "name": "pt2", 00:20:05.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:05.218 "is_configured": true, 00:20:05.218 "data_offset": 256, 00:20:05.218 "data_size": 7936 00:20:05.218 } 00:20:05.218 ] 00:20:05.218 }' 00:20:05.218 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.218 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.477 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:05.477 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.477 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.477 [2024-11-27 04:37:01.992533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.477 [2024-11-27 04:37:01.992647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.477 [2024-11-27 04:37:01.992760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.477 [2024-11-27 04:37:01.992847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.477 [2024-11-27 04:37:01.992895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:05.477 04:37:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.477 04:37:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.477 04:37:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:05.477 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.477 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.477 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.477 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:05.477 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.478 [2024-11-27 04:37:02.052477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:05.478 [2024-11-27 04:37:02.052618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.478 [2024-11-27 04:37:02.052670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:05.478 [2024-11-27 04:37:02.052712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.478 [2024-11-27 04:37:02.054917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.478 [2024-11-27 04:37:02.055006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:05.478 [2024-11-27 04:37:02.055141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:20:05.478 [2024-11-27 04:37:02.055232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:05.478 [2024-11-27 04:37:02.055432] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:05.478 [2024-11-27 04:37:02.055496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.478 [2024-11-27 04:37:02.055596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:05.478 [2024-11-27 04:37:02.055754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:05.478 [2024-11-27 04:37:02.055872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:05.478 [2024-11-27 04:37:02.055911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:05.478 [2024-11-27 04:37:02.056006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:05.478 [2024-11-27 04:37:02.056162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:05.478 [2024-11-27 04:37:02.056207] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:05.478 [2024-11-27 04:37:02.056414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.478 pt1 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.478 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.737 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.737 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.737 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.737 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.737 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.737 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.737 "name": "raid_bdev1", 00:20:05.737 "uuid": "a74fea28-ddec-4c6b-aadb-10f26b7b8195", 00:20:05.737 "strip_size_kb": 0, 00:20:05.737 "state": "online", 00:20:05.737 "raid_level": "raid1", 00:20:05.737 "superblock": true, 00:20:05.737 "num_base_bdevs": 2, 00:20:05.737 "num_base_bdevs_discovered": 1, 00:20:05.737 
"num_base_bdevs_operational": 1, 00:20:05.737 "base_bdevs_list": [ 00:20:05.737 { 00:20:05.737 "name": null, 00:20:05.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.737 "is_configured": false, 00:20:05.737 "data_offset": 256, 00:20:05.737 "data_size": 7936 00:20:05.737 }, 00:20:05.737 { 00:20:05.737 "name": "pt2", 00:20:05.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:05.737 "is_configured": true, 00:20:05.737 "data_offset": 256, 00:20:05.737 "data_size": 7936 00:20:05.737 } 00:20:05.737 ] 00:20:05.737 }' 00:20:05.737 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.737 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.996 [2024-11-27 
04:37:02.511995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' a74fea28-ddec-4c6b-aadb-10f26b7b8195 '!=' a74fea28-ddec-4c6b-aadb-10f26b7b8195 ']' 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87867 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87867 ']' 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87867 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.996 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87867 00:20:06.257 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.257 killing process with pid 87867 00:20:06.257 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.257 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87867' 00:20:06.257 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87867 00:20:06.257 [2024-11-27 04:37:02.594572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.257 [2024-11-27 04:37:02.594671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.257 [2024-11-27 04:37:02.594719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:20:06.257 [2024-11-27 04:37:02.594737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:06.257 04:37:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87867 00:20:06.257 [2024-11-27 04:37:02.832281] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.634 04:37:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:20:07.634 00:20:07.634 real 0m6.127s 00:20:07.634 user 0m9.206s 00:20:07.634 sys 0m1.125s 00:20:07.634 ************************************ 00:20:07.634 END TEST raid_superblock_test_md_separate 00:20:07.634 ************************************ 00:20:07.634 04:37:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.634 04:37:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.634 04:37:04 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:20:07.634 04:37:04 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:20:07.634 04:37:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:07.634 04:37:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.634 04:37:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.634 ************************************ 00:20:07.634 START TEST raid_rebuild_test_sb_md_separate 00:20:07.634 ************************************ 00:20:07.634 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:20:07.634 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:07.634 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:20:07.634 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:07.634 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:07.634 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:07.635 
04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88194 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88194 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88194 ']' 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.635 04:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.635 [2024-11-27 04:37:04.181777] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:07.635 [2024-11-27 04:37:04.182048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88194 ] 00:20:07.635 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:07.635 Zero copy mechanism will not be used. 00:20:07.893 [2024-11-27 04:37:04.360323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.151 [2024-11-27 04:37:04.478828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.151 [2024-11-27 04:37:04.679993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.151 [2024-11-27 04:37:04.680060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.719 BaseBdev1_malloc 
00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.719 [2024-11-27 04:37:05.166923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:08.719 [2024-11-27 04:37:05.167019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.719 [2024-11-27 04:37:05.167049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:08.719 [2024-11-27 04:37:05.167063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.719 [2024-11-27 04:37:05.169278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.719 [2024-11-27 04:37:05.169325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:08.719 BaseBdev1 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.719 BaseBdev2_malloc 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.719 [2024-11-27 04:37:05.224782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:08.719 [2024-11-27 04:37:05.224855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.719 [2024-11-27 04:37:05.224878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:08.719 [2024-11-27 04:37:05.224892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.719 [2024-11-27 04:37:05.227135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.719 [2024-11-27 04:37:05.227174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:08.719 BaseBdev2 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.719 spare_malloc 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.719 spare_delay 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.719 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.978 [2024-11-27 04:37:05.307583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:08.978 [2024-11-27 04:37:05.307773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.978 [2024-11-27 04:37:05.307814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:08.978 [2024-11-27 04:37:05.307829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.978 [2024-11-27 04:37:05.310287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.978 [2024-11-27 04:37:05.310331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:08.978 spare 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.978 [2024-11-27 04:37:05.319563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.978 [2024-11-27 04:37:05.321532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.978 [2024-11-27 04:37:05.321734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:08.978 [2024-11-27 04:37:05.321752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:08.978 [2024-11-27 04:37:05.321854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:08.978 [2024-11-27 04:37:05.322001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:08.978 [2024-11-27 04:37:05.322011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:08.978 [2024-11-27 04:37:05.322181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.978 04:37:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.978 "name": "raid_bdev1", 00:20:08.978 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:08.978 "strip_size_kb": 0, 00:20:08.978 "state": "online", 00:20:08.978 "raid_level": "raid1", 00:20:08.978 "superblock": true, 00:20:08.978 "num_base_bdevs": 2, 00:20:08.978 "num_base_bdevs_discovered": 2, 00:20:08.978 "num_base_bdevs_operational": 2, 00:20:08.978 "base_bdevs_list": [ 00:20:08.978 { 00:20:08.978 "name": "BaseBdev1", 00:20:08.978 "uuid": "8220f467-78bd-5d7a-99f9-970adb45449e", 00:20:08.978 "is_configured": true, 00:20:08.978 "data_offset": 256, 00:20:08.978 "data_size": 7936 00:20:08.978 }, 00:20:08.978 { 00:20:08.978 "name": "BaseBdev2", 00:20:08.978 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:08.978 "is_configured": true, 00:20:08.978 "data_offset": 256, 00:20:08.978 "data_size": 7936 
00:20:08.978 } 00:20:08.978 ] 00:20:08.978 }' 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.978 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.237 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:09.237 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.237 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.237 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:09.237 [2024-11-27 04:37:05.803051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.237 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:09.527 04:37:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:09.527 [2024-11-27 04:37:06.066359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:09.527 /dev/nbd0 00:20:09.528 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:09.528 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.790 1+0 records in 00:20:09.790 1+0 records out 00:20:09.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393036 s, 10.4 MB/s 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:09.790 04:37:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:09.790 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:10.357 7936+0 records in 00:20:10.357 7936+0 records out 00:20:10.357 32505856 bytes (33 MB, 31 MiB) copied, 0.697821 s, 46.6 MB/s 00:20:10.357 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:10.357 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:10.357 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:10.357 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:10.357 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:10.357 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.357 04:37:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:10.616 [2024-11-27 04:37:07.073217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:10.616 04:37:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.616 [2024-11-27 04:37:07.089646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.616 "name": "raid_bdev1", 00:20:10.616 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:10.616 "strip_size_kb": 0, 00:20:10.616 "state": "online", 00:20:10.616 "raid_level": "raid1", 00:20:10.616 "superblock": true, 00:20:10.616 "num_base_bdevs": 2, 00:20:10.616 "num_base_bdevs_discovered": 1, 00:20:10.616 "num_base_bdevs_operational": 1, 00:20:10.616 "base_bdevs_list": [ 00:20:10.616 { 00:20:10.616 "name": null, 00:20:10.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.616 "is_configured": false, 00:20:10.616 "data_offset": 0, 00:20:10.616 "data_size": 7936 00:20:10.616 }, 00:20:10.616 { 00:20:10.616 "name": "BaseBdev2", 00:20:10.616 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:10.616 "is_configured": true, 00:20:10.616 "data_offset": 256, 00:20:10.616 "data_size": 7936 00:20:10.616 } 00:20:10.616 ] 00:20:10.616 }' 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.616 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:11.183 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:11.183 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.183 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.183 [2024-11-27 04:37:07.588850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:11.183 [2024-11-27 04:37:07.605005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:11.183 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.183 04:37:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:11.183 [2024-11-27 04:37:07.607233] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.118 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.118 "name": "raid_bdev1", 00:20:12.118 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:12.118 "strip_size_kb": 0, 00:20:12.118 "state": "online", 00:20:12.118 "raid_level": "raid1", 00:20:12.118 "superblock": true, 00:20:12.118 "num_base_bdevs": 2, 00:20:12.118 "num_base_bdevs_discovered": 2, 00:20:12.118 "num_base_bdevs_operational": 2, 00:20:12.118 "process": { 00:20:12.118 "type": "rebuild", 00:20:12.118 "target": "spare", 00:20:12.118 "progress": { 00:20:12.119 "blocks": 2560, 00:20:12.119 "percent": 32 00:20:12.119 } 00:20:12.119 }, 00:20:12.119 "base_bdevs_list": [ 00:20:12.119 { 00:20:12.119 "name": "spare", 00:20:12.119 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:12.119 "is_configured": true, 00:20:12.119 "data_offset": 256, 00:20:12.119 "data_size": 7936 00:20:12.119 }, 00:20:12.119 { 00:20:12.119 "name": "BaseBdev2", 00:20:12.119 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:12.119 "is_configured": true, 00:20:12.119 "data_offset": 256, 00:20:12.119 "data_size": 7936 00:20:12.119 } 00:20:12.119 ] 00:20:12.119 }' 00:20:12.119 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.377 04:37:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.377 [2024-11-27 04:37:08.766887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:12.377 [2024-11-27 04:37:08.813310] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:12.377 [2024-11-27 04:37:08.813531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.377 [2024-11-27 04:37:08.813563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:12.377 [2024-11-27 04:37:08.813579] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.377 04:37:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.377 "name": "raid_bdev1", 00:20:12.377 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:12.377 "strip_size_kb": 0, 00:20:12.377 "state": "online", 00:20:12.377 "raid_level": "raid1", 00:20:12.377 "superblock": true, 00:20:12.377 "num_base_bdevs": 2, 00:20:12.377 "num_base_bdevs_discovered": 1, 00:20:12.377 "num_base_bdevs_operational": 1, 00:20:12.377 "base_bdevs_list": [ 00:20:12.377 { 00:20:12.377 "name": null, 00:20:12.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.377 "is_configured": false, 00:20:12.377 "data_offset": 0, 00:20:12.377 "data_size": 7936 00:20:12.377 }, 00:20:12.377 { 00:20:12.377 "name": "BaseBdev2", 00:20:12.377 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:12.377 "is_configured": true, 00:20:12.377 "data_offset": 256, 00:20:12.377 "data_size": 7936 00:20:12.377 } 00:20:12.377 ] 00:20:12.377 }' 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.377 04:37:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.948 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.948 "name": "raid_bdev1", 00:20:12.948 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:12.948 "strip_size_kb": 0, 00:20:12.948 "state": "online", 00:20:12.948 "raid_level": "raid1", 00:20:12.948 "superblock": true, 00:20:12.948 "num_base_bdevs": 2, 00:20:12.948 "num_base_bdevs_discovered": 1, 00:20:12.948 "num_base_bdevs_operational": 1, 00:20:12.948 "base_bdevs_list": [ 00:20:12.948 { 00:20:12.948 "name": null, 00:20:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.948 
"is_configured": false, 00:20:12.948 "data_offset": 0, 00:20:12.948 "data_size": 7936 00:20:12.948 }, 00:20:12.948 { 00:20:12.948 "name": "BaseBdev2", 00:20:12.948 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:12.948 "is_configured": true, 00:20:12.948 "data_offset": 256, 00:20:12.948 "data_size": 7936 00:20:12.948 } 00:20:12.948 ] 00:20:12.948 }' 00:20:12.949 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.949 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:12.949 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.949 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:12.949 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:12.949 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.949 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.949 [2024-11-27 04:37:09.419994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:12.949 [2024-11-27 04:37:09.436036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:12.949 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.949 04:37:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:12.949 [2024-11-27 04:37:09.438101] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:13.887 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.887 04:37:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.887 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.887 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.887 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.887 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.887 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.887 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.887 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.887 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.146 "name": "raid_bdev1", 00:20:14.146 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:14.146 "strip_size_kb": 0, 00:20:14.146 "state": "online", 00:20:14.146 "raid_level": "raid1", 00:20:14.146 "superblock": true, 00:20:14.146 "num_base_bdevs": 2, 00:20:14.146 "num_base_bdevs_discovered": 2, 00:20:14.146 "num_base_bdevs_operational": 2, 00:20:14.146 "process": { 00:20:14.146 "type": "rebuild", 00:20:14.146 "target": "spare", 00:20:14.146 "progress": { 00:20:14.146 "blocks": 2560, 00:20:14.146 "percent": 32 00:20:14.146 } 00:20:14.146 }, 00:20:14.146 "base_bdevs_list": [ 00:20:14.146 { 00:20:14.146 "name": "spare", 00:20:14.146 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:14.146 "is_configured": true, 00:20:14.146 "data_offset": 256, 00:20:14.146 "data_size": 7936 00:20:14.146 }, 
00:20:14.146 { 00:20:14.146 "name": "BaseBdev2", 00:20:14.146 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:14.146 "is_configured": true, 00:20:14.146 "data_offset": 256, 00:20:14.146 "data_size": 7936 00:20:14.146 } 00:20:14.146 ] 00:20:14.146 }' 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:14.146 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=738 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.146 04:37:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.146 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.146 "name": "raid_bdev1", 00:20:14.146 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:14.146 "strip_size_kb": 0, 00:20:14.146 "state": "online", 00:20:14.146 "raid_level": "raid1", 00:20:14.146 "superblock": true, 00:20:14.146 "num_base_bdevs": 2, 00:20:14.146 "num_base_bdevs_discovered": 2, 00:20:14.146 "num_base_bdevs_operational": 2, 00:20:14.146 "process": { 00:20:14.146 "type": "rebuild", 00:20:14.147 "target": "spare", 00:20:14.147 "progress": { 00:20:14.147 "blocks": 2816, 00:20:14.147 "percent": 35 00:20:14.147 } 00:20:14.147 }, 00:20:14.147 "base_bdevs_list": [ 00:20:14.147 { 00:20:14.147 "name": "spare", 00:20:14.147 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:14.147 "is_configured": true, 00:20:14.147 "data_offset": 256, 00:20:14.147 "data_size": 7936 00:20:14.147 }, 00:20:14.147 { 00:20:14.147 "name": "BaseBdev2", 00:20:14.147 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:14.147 
"is_configured": true, 00:20:14.147 "data_offset": 256, 00:20:14.147 "data_size": 7936 00:20:14.147 } 00:20:14.147 ] 00:20:14.147 }' 00:20:14.147 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.147 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.147 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.407 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.407 04:37:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.346 04:37:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.346 "name": "raid_bdev1", 00:20:15.346 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:15.346 "strip_size_kb": 0, 00:20:15.346 "state": "online", 00:20:15.346 "raid_level": "raid1", 00:20:15.346 "superblock": true, 00:20:15.346 "num_base_bdevs": 2, 00:20:15.346 "num_base_bdevs_discovered": 2, 00:20:15.346 "num_base_bdevs_operational": 2, 00:20:15.346 "process": { 00:20:15.346 "type": "rebuild", 00:20:15.346 "target": "spare", 00:20:15.346 "progress": { 00:20:15.346 "blocks": 5888, 00:20:15.346 "percent": 74 00:20:15.346 } 00:20:15.346 }, 00:20:15.346 "base_bdevs_list": [ 00:20:15.346 { 00:20:15.346 "name": "spare", 00:20:15.346 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:15.346 "is_configured": true, 00:20:15.346 "data_offset": 256, 00:20:15.346 "data_size": 7936 00:20:15.346 }, 00:20:15.346 { 00:20:15.346 "name": "BaseBdev2", 00:20:15.346 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:15.346 "is_configured": true, 00:20:15.346 "data_offset": 256, 00:20:15.346 "data_size": 7936 00:20:15.346 } 00:20:15.346 ] 00:20:15.346 }' 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.346 04:37:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:16.285 [2024-11-27 04:37:12.553621] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:20:16.286 [2024-11-27 04:37:12.553816] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:16.286 [2024-11-27 04:37:12.554015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.545 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.545 "name": "raid_bdev1", 00:20:16.545 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:16.545 "strip_size_kb": 0, 00:20:16.545 "state": "online", 00:20:16.545 "raid_level": "raid1", 00:20:16.545 "superblock": true, 00:20:16.545 
"num_base_bdevs": 2, 00:20:16.545 "num_base_bdevs_discovered": 2, 00:20:16.545 "num_base_bdevs_operational": 2, 00:20:16.545 "base_bdevs_list": [ 00:20:16.545 { 00:20:16.545 "name": "spare", 00:20:16.545 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:16.545 "is_configured": true, 00:20:16.546 "data_offset": 256, 00:20:16.546 "data_size": 7936 00:20:16.546 }, 00:20:16.546 { 00:20:16.546 "name": "BaseBdev2", 00:20:16.546 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:16.546 "is_configured": true, 00:20:16.546 "data_offset": 256, 00:20:16.546 "data_size": 7936 00:20:16.546 } 00:20:16.546 ] 00:20:16.546 }' 00:20:16.546 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.546 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:16.546 04:37:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.546 
04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.546 "name": "raid_bdev1", 00:20:16.546 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:16.546 "strip_size_kb": 0, 00:20:16.546 "state": "online", 00:20:16.546 "raid_level": "raid1", 00:20:16.546 "superblock": true, 00:20:16.546 "num_base_bdevs": 2, 00:20:16.546 "num_base_bdevs_discovered": 2, 00:20:16.546 "num_base_bdevs_operational": 2, 00:20:16.546 "base_bdevs_list": [ 00:20:16.546 { 00:20:16.546 "name": "spare", 00:20:16.546 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:16.546 "is_configured": true, 00:20:16.546 "data_offset": 256, 00:20:16.546 "data_size": 7936 00:20:16.546 }, 00:20:16.546 { 00:20:16.546 "name": "BaseBdev2", 00:20:16.546 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:16.546 "is_configured": true, 00:20:16.546 "data_offset": 256, 00:20:16.546 "data_size": 7936 00:20:16.546 } 00:20:16.546 ] 00:20:16.546 }' 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:16.546 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.808 "name": "raid_bdev1", 00:20:16.808 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:16.808 
"strip_size_kb": 0, 00:20:16.808 "state": "online", 00:20:16.808 "raid_level": "raid1", 00:20:16.808 "superblock": true, 00:20:16.808 "num_base_bdevs": 2, 00:20:16.808 "num_base_bdevs_discovered": 2, 00:20:16.808 "num_base_bdevs_operational": 2, 00:20:16.808 "base_bdevs_list": [ 00:20:16.808 { 00:20:16.808 "name": "spare", 00:20:16.808 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:16.808 "is_configured": true, 00:20:16.808 "data_offset": 256, 00:20:16.808 "data_size": 7936 00:20:16.808 }, 00:20:16.808 { 00:20:16.808 "name": "BaseBdev2", 00:20:16.808 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:16.808 "is_configured": true, 00:20:16.808 "data_offset": 256, 00:20:16.808 "data_size": 7936 00:20:16.808 } 00:20:16.808 ] 00:20:16.808 }' 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.808 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.377 [2024-11-27 04:37:13.683044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.377 [2024-11-27 04:37:13.683081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.377 [2024-11-27 04:37:13.683194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.377 [2024-11-27 04:37:13.683261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.377 [2024-11-27 04:37:13.683273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:17.377 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:17.377 /dev/nbd0 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:17.637 1+0 records in 00:20:17.637 1+0 records out 00:20:17.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470571 s, 8.7 MB/s 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:17.637 04:37:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:17.637 /dev/nbd1 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:17.898 1+0 records in 00:20:17.898 1+0 records out 00:20:17.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466601 s, 8.8 MB/s 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:17.898 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:18.158 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:18.158 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:18.159 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:18.159 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:18.159 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:18.159 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:18.159 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:18.159 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:18.159 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:18.159 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:18.419 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:18.419 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:18.419 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:20:18.419 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:18.419 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:18.419 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:18.419 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.420 [2024-11-27 04:37:14.917784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:18.420 [2024-11-27 04:37:14.917893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.420 [2024-11-27 04:37:14.917936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:18.420 [2024-11-27 04:37:14.917972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:18.420 [2024-11-27 04:37:14.920241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.420 [2024-11-27 04:37:14.920322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:18.420 [2024-11-27 04:37:14.920424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:18.420 [2024-11-27 04:37:14.920523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.420 [2024-11-27 04:37:14.920723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.420 spare 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.420 04:37:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.680 [2024-11-27 04:37:15.020671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:18.680 [2024-11-27 04:37:15.020776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:18.680 [2024-11-27 04:37:15.020912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:18.680 [2024-11-27 04:37:15.021112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:18.680 [2024-11-27 04:37:15.021124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:18.680 [2024-11-27 04:37:15.021286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.680 "name": "raid_bdev1", 00:20:18.680 "uuid": 
"b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:18.680 "strip_size_kb": 0, 00:20:18.680 "state": "online", 00:20:18.680 "raid_level": "raid1", 00:20:18.680 "superblock": true, 00:20:18.680 "num_base_bdevs": 2, 00:20:18.680 "num_base_bdevs_discovered": 2, 00:20:18.680 "num_base_bdevs_operational": 2, 00:20:18.680 "base_bdevs_list": [ 00:20:18.680 { 00:20:18.680 "name": "spare", 00:20:18.680 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:18.680 "is_configured": true, 00:20:18.680 "data_offset": 256, 00:20:18.680 "data_size": 7936 00:20:18.680 }, 00:20:18.680 { 00:20:18.680 "name": "BaseBdev2", 00:20:18.680 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:18.680 "is_configured": true, 00:20:18.680 "data_offset": 256, 00:20:18.680 "data_size": 7936 00:20:18.680 } 00:20:18.680 ] 00:20:18.680 }' 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.680 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.940 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.200 "name": "raid_bdev1", 00:20:19.200 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:19.200 "strip_size_kb": 0, 00:20:19.200 "state": "online", 00:20:19.200 "raid_level": "raid1", 00:20:19.200 "superblock": true, 00:20:19.200 "num_base_bdevs": 2, 00:20:19.200 "num_base_bdevs_discovered": 2, 00:20:19.200 "num_base_bdevs_operational": 2, 00:20:19.200 "base_bdevs_list": [ 00:20:19.200 { 00:20:19.200 "name": "spare", 00:20:19.200 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:19.200 "is_configured": true, 00:20:19.200 "data_offset": 256, 00:20:19.200 "data_size": 7936 00:20:19.200 }, 00:20:19.200 { 00:20:19.200 "name": "BaseBdev2", 00:20:19.200 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:19.200 "is_configured": true, 00:20:19.200 "data_offset": 256, 00:20:19.200 "data_size": 7936 00:20:19.200 } 00:20:19.200 ] 00:20:19.200 }' 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.200 [2024-11-27 04:37:15.700532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.200 04:37:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.200 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.200 "name": "raid_bdev1", 00:20:19.201 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:19.201 "strip_size_kb": 0, 00:20:19.201 "state": "online", 00:20:19.201 "raid_level": "raid1", 00:20:19.201 "superblock": true, 00:20:19.201 "num_base_bdevs": 2, 00:20:19.201 "num_base_bdevs_discovered": 1, 00:20:19.201 "num_base_bdevs_operational": 1, 00:20:19.201 "base_bdevs_list": [ 00:20:19.201 { 00:20:19.201 "name": null, 00:20:19.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.201 "is_configured": false, 00:20:19.201 "data_offset": 0, 00:20:19.201 "data_size": 7936 00:20:19.201 }, 00:20:19.201 { 00:20:19.201 "name": "BaseBdev2", 00:20:19.201 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:19.201 "is_configured": true, 00:20:19.201 "data_offset": 256, 00:20:19.201 "data_size": 7936 00:20:19.201 } 00:20:19.201 ] 00:20:19.201 }' 00:20:19.201 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.201 04:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.769 04:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:19.769 04:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.769 04:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.769 [2024-11-27 04:37:16.171772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.769 [2024-11-27 04:37:16.172052] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:19.769 [2024-11-27 04:37:16.172075] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:19.769 [2024-11-27 04:37:16.172129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.769 [2024-11-27 04:37:16.186156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:19.769 04:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.769 04:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:19.769 [2024-11-27 04:37:16.188056] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.742 04:37:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.742 "name": "raid_bdev1", 00:20:20.742 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:20.742 "strip_size_kb": 0, 00:20:20.742 "state": "online", 00:20:20.742 "raid_level": "raid1", 00:20:20.742 "superblock": true, 00:20:20.742 "num_base_bdevs": 2, 00:20:20.742 "num_base_bdevs_discovered": 2, 00:20:20.742 "num_base_bdevs_operational": 2, 00:20:20.742 "process": { 00:20:20.742 "type": "rebuild", 00:20:20.742 "target": "spare", 00:20:20.742 "progress": { 00:20:20.742 "blocks": 2560, 00:20:20.742 "percent": 32 00:20:20.742 } 00:20:20.742 }, 00:20:20.742 "base_bdevs_list": [ 00:20:20.742 { 00:20:20.742 "name": "spare", 00:20:20.742 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:20.742 "is_configured": true, 00:20:20.742 "data_offset": 256, 00:20:20.742 "data_size": 7936 00:20:20.742 }, 00:20:20.742 { 00:20:20.742 "name": "BaseBdev2", 00:20:20.742 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:20.742 "is_configured": true, 00:20:20.742 "data_offset": 256, 00:20:20.742 "data_size": 7936 00:20:20.742 } 00:20:20.742 ] 00:20:20.742 
}' 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.742 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.003 [2024-11-27 04:37:17.332214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.003 [2024-11-27 04:37:17.393948] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:21.003 [2024-11-27 04:37:17.394097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.003 [2024-11-27 04:37:17.394135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.003 [2024-11-27 04:37:17.394172] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.003 "name": "raid_bdev1", 00:20:21.003 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:21.003 "strip_size_kb": 0, 00:20:21.003 "state": "online", 00:20:21.003 "raid_level": "raid1", 00:20:21.003 "superblock": true, 00:20:21.003 "num_base_bdevs": 2, 00:20:21.003 "num_base_bdevs_discovered": 1, 00:20:21.003 "num_base_bdevs_operational": 1, 00:20:21.003 "base_bdevs_list": [ 00:20:21.003 { 00:20:21.003 "name": 
null, 00:20:21.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.003 "is_configured": false, 00:20:21.003 "data_offset": 0, 00:20:21.003 "data_size": 7936 00:20:21.003 }, 00:20:21.003 { 00:20:21.003 "name": "BaseBdev2", 00:20:21.003 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:21.003 "is_configured": true, 00:20:21.003 "data_offset": 256, 00:20:21.003 "data_size": 7936 00:20:21.003 } 00:20:21.003 ] 00:20:21.003 }' 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.003 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.572 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:21.572 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.572 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.572 [2024-11-27 04:37:17.874131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:21.572 [2024-11-27 04:37:17.874250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.572 [2024-11-27 04:37:17.874295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:21.572 [2024-11-27 04:37:17.874326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.572 [2024-11-27 04:37:17.874645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.572 [2024-11-27 04:37:17.874707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:21.572 [2024-11-27 04:37:17.874812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:21.572 [2024-11-27 04:37:17.874856] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:21.572 [2024-11-27 04:37:17.874903] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:21.572 [2024-11-27 04:37:17.874961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:21.572 [2024-11-27 04:37:17.889003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:21.572 spare 00:20:21.572 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.572 [2024-11-27 04:37:17.890996] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.572 04:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:22.509 04:37:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.509 "name": "raid_bdev1", 00:20:22.509 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:22.509 "strip_size_kb": 0, 00:20:22.509 "state": "online", 00:20:22.509 "raid_level": "raid1", 00:20:22.509 "superblock": true, 00:20:22.509 "num_base_bdevs": 2, 00:20:22.509 "num_base_bdevs_discovered": 2, 00:20:22.509 "num_base_bdevs_operational": 2, 00:20:22.509 "process": { 00:20:22.509 "type": "rebuild", 00:20:22.509 "target": "spare", 00:20:22.509 "progress": { 00:20:22.509 "blocks": 2560, 00:20:22.509 "percent": 32 00:20:22.509 } 00:20:22.509 }, 00:20:22.509 "base_bdevs_list": [ 00:20:22.509 { 00:20:22.509 "name": "spare", 00:20:22.509 "uuid": "4e6be7eb-c3ce-5cd9-b85d-030642ca3e20", 00:20:22.509 "is_configured": true, 00:20:22.509 "data_offset": 256, 00:20:22.509 "data_size": 7936 00:20:22.509 }, 00:20:22.509 { 00:20:22.509 "name": "BaseBdev2", 00:20:22.509 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:22.509 "is_configured": true, 00:20:22.509 "data_offset": 256, 00:20:22.509 "data_size": 7936 00:20:22.509 } 00:20:22.509 ] 00:20:22.509 }' 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.509 04:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.509 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.509 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:22.509 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.509 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:22.509 [2024-11-27 04:37:19.023436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:22.769 [2024-11-27 04:37:19.096964] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:22.769 [2024-11-27 04:37:19.097032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.769 [2024-11-27 04:37:19.097050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:22.769 [2024-11-27 04:37:19.097057] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.769 "name": "raid_bdev1", 00:20:22.769 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:22.769 "strip_size_kb": 0, 00:20:22.769 "state": "online", 00:20:22.769 "raid_level": "raid1", 00:20:22.769 "superblock": true, 00:20:22.769 "num_base_bdevs": 2, 00:20:22.769 "num_base_bdevs_discovered": 1, 00:20:22.769 "num_base_bdevs_operational": 1, 00:20:22.769 "base_bdevs_list": [ 00:20:22.769 { 00:20:22.769 "name": null, 00:20:22.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.769 "is_configured": false, 00:20:22.769 "data_offset": 0, 00:20:22.769 "data_size": 7936 00:20:22.769 }, 00:20:22.769 { 00:20:22.769 "name": "BaseBdev2", 00:20:22.769 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:22.769 "is_configured": true, 00:20:22.769 "data_offset": 256, 00:20:22.769 "data_size": 7936 00:20:22.769 } 00:20:22.769 ] 00:20:22.769 }' 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.769 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.028 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.288 "name": "raid_bdev1", 00:20:23.288 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:23.288 "strip_size_kb": 0, 00:20:23.288 "state": "online", 00:20:23.288 "raid_level": "raid1", 00:20:23.288 "superblock": true, 00:20:23.288 "num_base_bdevs": 2, 00:20:23.288 "num_base_bdevs_discovered": 1, 00:20:23.288 "num_base_bdevs_operational": 1, 00:20:23.288 "base_bdevs_list": [ 00:20:23.288 { 00:20:23.288 "name": null, 00:20:23.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.288 "is_configured": false, 00:20:23.288 "data_offset": 0, 00:20:23.288 "data_size": 7936 00:20:23.288 }, 00:20:23.288 { 00:20:23.288 "name": "BaseBdev2", 00:20:23.288 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 
00:20:23.288 "is_configured": true, 00:20:23.288 "data_offset": 256, 00:20:23.288 "data_size": 7936 00:20:23.288 } 00:20:23.288 ] 00:20:23.288 }' 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.288 [2024-11-27 04:37:19.740506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:23.288 [2024-11-27 04:37:19.740566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.288 [2024-11-27 04:37:19.740605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:23.288 [2024-11-27 04:37:19.740616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:23.288 [2024-11-27 04:37:19.740882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.288 [2024-11-27 04:37:19.740898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:23.288 [2024-11-27 04:37:19.740953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:23.288 [2024-11-27 04:37:19.740968] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:23.288 [2024-11-27 04:37:19.740982] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:23.288 [2024-11-27 04:37:19.740993] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:23.288 BaseBdev1 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.288 04:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.227 04:37:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.227 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.227 "name": "raid_bdev1", 00:20:24.227 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:24.228 "strip_size_kb": 0, 00:20:24.228 "state": "online", 00:20:24.228 "raid_level": "raid1", 00:20:24.228 "superblock": true, 00:20:24.228 "num_base_bdevs": 2, 00:20:24.228 "num_base_bdevs_discovered": 1, 00:20:24.228 "num_base_bdevs_operational": 1, 00:20:24.228 "base_bdevs_list": [ 00:20:24.228 { 00:20:24.228 "name": null, 00:20:24.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.228 "is_configured": false, 00:20:24.228 "data_offset": 0, 00:20:24.228 "data_size": 7936 00:20:24.228 }, 00:20:24.228 { 00:20:24.228 "name": "BaseBdev2", 00:20:24.228 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:24.228 "is_configured": true, 00:20:24.228 "data_offset": 256, 00:20:24.228 "data_size": 7936 00:20:24.228 } 00:20:24.228 ] 00:20:24.228 }' 00:20:24.228 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.228 04:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.798 "name": "raid_bdev1", 00:20:24.798 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:24.798 "strip_size_kb": 0, 00:20:24.798 "state": "online", 00:20:24.798 "raid_level": "raid1", 00:20:24.798 "superblock": true, 00:20:24.798 "num_base_bdevs": 2, 00:20:24.798 "num_base_bdevs_discovered": 1, 00:20:24.798 "num_base_bdevs_operational": 1, 00:20:24.798 "base_bdevs_list": [ 00:20:24.798 { 00:20:24.798 "name": null, 00:20:24.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.798 
"is_configured": false, 00:20:24.798 "data_offset": 0, 00:20:24.798 "data_size": 7936 00:20:24.798 }, 00:20:24.798 { 00:20:24.798 "name": "BaseBdev2", 00:20:24.798 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:24.798 "is_configured": true, 00:20:24.798 "data_offset": 256, 00:20:24.798 "data_size": 7936 00:20:24.798 } 00:20:24.798 ] 00:20:24.798 }' 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:24.798 04:37:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.798 [2024-11-27 04:37:21.373861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.798 [2024-11-27 04:37:21.374035] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:24.798 [2024-11-27 04:37:21.374051] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:24.798 request: 00:20:24.798 { 00:20:24.798 "base_bdev": "BaseBdev1", 00:20:24.798 "raid_bdev": "raid_bdev1", 00:20:24.798 "method": "bdev_raid_add_base_bdev", 00:20:24.798 "req_id": 1 00:20:24.798 } 00:20:24.798 Got JSON-RPC error response 00:20:24.798 response: 00:20:24.798 { 00:20:24.798 "code": -22, 00:20:24.798 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:24.798 } 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:24.798 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.057 04:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.995 "name": "raid_bdev1", 00:20:25.995 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:25.995 "strip_size_kb": 0, 00:20:25.995 "state": "online", 00:20:25.995 "raid_level": "raid1", 00:20:25.995 "superblock": true, 00:20:25.995 "num_base_bdevs": 2, 00:20:25.995 
"num_base_bdevs_discovered": 1, 00:20:25.995 "num_base_bdevs_operational": 1, 00:20:25.995 "base_bdevs_list": [ 00:20:25.995 { 00:20:25.995 "name": null, 00:20:25.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.995 "is_configured": false, 00:20:25.995 "data_offset": 0, 00:20:25.995 "data_size": 7936 00:20:25.995 }, 00:20:25.995 { 00:20:25.995 "name": "BaseBdev2", 00:20:25.995 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:25.995 "is_configured": true, 00:20:25.995 "data_offset": 256, 00:20:25.995 "data_size": 7936 00:20:25.995 } 00:20:25.995 ] 00:20:25.995 }' 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.995 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.255 "name": "raid_bdev1", 00:20:26.255 "uuid": "b76bf0bc-58aa-4f1d-97e9-571fe9b7fc94", 00:20:26.255 "strip_size_kb": 0, 00:20:26.255 "state": "online", 00:20:26.255 "raid_level": "raid1", 00:20:26.255 "superblock": true, 00:20:26.255 "num_base_bdevs": 2, 00:20:26.255 "num_base_bdevs_discovered": 1, 00:20:26.255 "num_base_bdevs_operational": 1, 00:20:26.255 "base_bdevs_list": [ 00:20:26.255 { 00:20:26.255 "name": null, 00:20:26.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.255 "is_configured": false, 00:20:26.255 "data_offset": 0, 00:20:26.255 "data_size": 7936 00:20:26.255 }, 00:20:26.255 { 00:20:26.255 "name": "BaseBdev2", 00:20:26.255 "uuid": "c793e93d-722c-558c-8431-6b4c5690aa78", 00:20:26.255 "is_configured": true, 00:20:26.255 "data_offset": 256, 00:20:26.255 "data_size": 7936 00:20:26.255 } 00:20:26.255 ] 00:20:26.255 }' 00:20:26.255 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88194 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88194 ']' 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88194 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:26.513 04:37:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88194 00:20:26.513 killing process with pid 88194 00:20:26.513 Received shutdown signal, test time was about 60.000000 seconds 00:20:26.513 00:20:26.513 Latency(us) 00:20:26.513 [2024-11-27T04:37:23.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.513 [2024-11-27T04:37:23.100Z] =================================================================================================================== 00:20:26.513 [2024-11-27T04:37:23.100Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88194' 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88194 00:20:26.513 [2024-11-27 04:37:22.969820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.513 [2024-11-27 04:37:22.969955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.513 04:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88194 00:20:26.513 [2024-11-27 04:37:22.970008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.513 [2024-11-27 04:37:22.970022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:27.080 [2024-11-27 04:37:23.361266] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:20:28.459 04:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:20:28.459 00:20:28.459 real 0m20.633s 00:20:28.459 user 0m27.123s 00:20:28.459 sys 0m2.612s 00:20:28.459 04:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.459 04:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.459 ************************************ 00:20:28.459 END TEST raid_rebuild_test_sb_md_separate 00:20:28.459 ************************************ 00:20:28.459 04:37:24 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:28.459 04:37:24 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:28.459 04:37:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:28.459 04:37:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.459 04:37:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:28.459 ************************************ 00:20:28.459 START TEST raid_state_function_test_sb_md_interleaved 00:20:28.459 ************************************ 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:28.459 04:37:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88893 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88893' 00:20:28.459 Process raid pid: 88893 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88893 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88893 ']' 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.459 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.460 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.460 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.460 04:37:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.460 [2024-11-27 04:37:24.854434] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:20:28.460 [2024-11-27 04:37:24.854624] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.460 [2024-11-27 04:37:25.036528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.719 [2024-11-27 04:37:25.174800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.979 [2024-11-27 04:37:25.418746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.979 [2024-11-27 04:37:25.418885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.238 [2024-11-27 04:37:25.773983] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:29.238 [2024-11-27 04:37:25.774045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:29.238 [2024-11-27 04:37:25.774057] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:29.238 [2024-11-27 04:37:25.774068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:29.238 04:37:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.238 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.238 04:37:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.498 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.498 "name": "Existed_Raid", 00:20:29.498 "uuid": "9991bbdc-9382-4fac-a6c9-3e55dc333575", 00:20:29.498 "strip_size_kb": 0, 00:20:29.498 "state": "configuring", 00:20:29.498 "raid_level": "raid1", 00:20:29.498 "superblock": true, 00:20:29.498 "num_base_bdevs": 2, 00:20:29.498 "num_base_bdevs_discovered": 0, 00:20:29.498 "num_base_bdevs_operational": 2, 00:20:29.498 "base_bdevs_list": [ 00:20:29.498 { 00:20:29.498 "name": "BaseBdev1", 00:20:29.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.498 "is_configured": false, 00:20:29.498 "data_offset": 0, 00:20:29.498 "data_size": 0 00:20:29.498 }, 00:20:29.498 { 00:20:29.498 "name": "BaseBdev2", 00:20:29.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.498 "is_configured": false, 00:20:29.498 "data_offset": 0, 00:20:29.498 "data_size": 0 00:20:29.498 } 00:20:29.498 ] 00:20:29.498 }' 00:20:29.498 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.498 04:37:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.758 [2024-11-27 04:37:26.257086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.758 [2024-11-27 04:37:26.257204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.758 [2024-11-27 04:37:26.269062] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:29.758 [2024-11-27 04:37:26.269175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:29.758 [2024-11-27 04:37:26.269216] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:29.758 [2024-11-27 04:37:26.269257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.758 [2024-11-27 04:37:26.319333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.758 BaseBdev1 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.758 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.045 [ 00:20:30.045 { 00:20:30.045 "name": "BaseBdev1", 00:20:30.045 "aliases": [ 00:20:30.045 "fe103a1e-054b-4872-941f-261fd11bc2fb" 00:20:30.045 ], 00:20:30.045 "product_name": "Malloc disk", 00:20:30.045 "block_size": 4128, 00:20:30.045 "num_blocks": 8192, 00:20:30.045 "uuid": "fe103a1e-054b-4872-941f-261fd11bc2fb", 00:20:30.045 "md_size": 32, 00:20:30.045 
"md_interleave": true, 00:20:30.045 "dif_type": 0, 00:20:30.045 "assigned_rate_limits": { 00:20:30.045 "rw_ios_per_sec": 0, 00:20:30.045 "rw_mbytes_per_sec": 0, 00:20:30.045 "r_mbytes_per_sec": 0, 00:20:30.045 "w_mbytes_per_sec": 0 00:20:30.045 }, 00:20:30.045 "claimed": true, 00:20:30.045 "claim_type": "exclusive_write", 00:20:30.045 "zoned": false, 00:20:30.045 "supported_io_types": { 00:20:30.045 "read": true, 00:20:30.045 "write": true, 00:20:30.045 "unmap": true, 00:20:30.045 "flush": true, 00:20:30.045 "reset": true, 00:20:30.045 "nvme_admin": false, 00:20:30.045 "nvme_io": false, 00:20:30.045 "nvme_io_md": false, 00:20:30.045 "write_zeroes": true, 00:20:30.045 "zcopy": true, 00:20:30.045 "get_zone_info": false, 00:20:30.045 "zone_management": false, 00:20:30.045 "zone_append": false, 00:20:30.045 "compare": false, 00:20:30.045 "compare_and_write": false, 00:20:30.045 "abort": true, 00:20:30.045 "seek_hole": false, 00:20:30.045 "seek_data": false, 00:20:30.045 "copy": true, 00:20:30.045 "nvme_iov_md": false 00:20:30.045 }, 00:20:30.045 "memory_domains": [ 00:20:30.045 { 00:20:30.045 "dma_device_id": "system", 00:20:30.045 "dma_device_type": 1 00:20:30.045 }, 00:20:30.045 { 00:20:30.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.045 "dma_device_type": 2 00:20:30.045 } 00:20:30.045 ], 00:20:30.045 "driver_specific": {} 00:20:30.045 } 00:20:30.045 ] 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.045 04:37:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.045 "name": "Existed_Raid", 00:20:30.045 "uuid": "e63a5e20-0d58-4b01-82ab-e03a979b3c67", 00:20:30.045 "strip_size_kb": 0, 00:20:30.045 "state": "configuring", 00:20:30.045 "raid_level": "raid1", 
00:20:30.045 "superblock": true, 00:20:30.045 "num_base_bdevs": 2, 00:20:30.045 "num_base_bdevs_discovered": 1, 00:20:30.045 "num_base_bdevs_operational": 2, 00:20:30.045 "base_bdevs_list": [ 00:20:30.045 { 00:20:30.045 "name": "BaseBdev1", 00:20:30.045 "uuid": "fe103a1e-054b-4872-941f-261fd11bc2fb", 00:20:30.045 "is_configured": true, 00:20:30.045 "data_offset": 256, 00:20:30.045 "data_size": 7936 00:20:30.045 }, 00:20:30.045 { 00:20:30.045 "name": "BaseBdev2", 00:20:30.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.045 "is_configured": false, 00:20:30.045 "data_offset": 0, 00:20:30.045 "data_size": 0 00:20:30.045 } 00:20:30.045 ] 00:20:30.045 }' 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.045 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.310 [2024-11-27 04:37:26.870528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:30.310 [2024-11-27 04:37:26.870667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.310 [2024-11-27 04:37:26.882618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.310 [2024-11-27 04:37:26.884800] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:30.310 [2024-11-27 04:37:26.884920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.310 
04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.310 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.570 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.570 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.570 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.570 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.570 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.570 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.570 "name": "Existed_Raid", 00:20:30.570 "uuid": "eea3b541-1b3c-4d85-bfb9-53d2b548971c", 00:20:30.570 "strip_size_kb": 0, 00:20:30.570 "state": "configuring", 00:20:30.570 "raid_level": "raid1", 00:20:30.570 "superblock": true, 00:20:30.570 "num_base_bdevs": 2, 00:20:30.570 "num_base_bdevs_discovered": 1, 00:20:30.570 "num_base_bdevs_operational": 2, 00:20:30.570 "base_bdevs_list": [ 00:20:30.570 { 00:20:30.570 "name": "BaseBdev1", 00:20:30.570 "uuid": "fe103a1e-054b-4872-941f-261fd11bc2fb", 00:20:30.570 "is_configured": true, 00:20:30.570 "data_offset": 256, 00:20:30.570 "data_size": 7936 00:20:30.570 }, 00:20:30.570 { 00:20:30.570 "name": "BaseBdev2", 00:20:30.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.570 "is_configured": false, 00:20:30.570 "data_offset": 0, 00:20:30.570 "data_size": 0 00:20:30.570 } 00:20:30.570 ] 00:20:30.570 }' 00:20:30.570 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:30.570 04:37:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.829 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:30.829 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.829 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.094 [2024-11-27 04:37:27.415367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:31.094 [2024-11-27 04:37:27.415753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:31.094 [2024-11-27 04:37:27.415818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:31.094 [2024-11-27 04:37:27.415945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:31.094 [2024-11-27 04:37:27.416072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:31.094 [2024-11-27 04:37:27.416152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:31.094 [2024-11-27 04:37:27.416265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.094 BaseBdev2 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.095 [ 00:20:31.095 { 00:20:31.095 "name": "BaseBdev2", 00:20:31.095 "aliases": [ 00:20:31.095 "1e4019da-2aa0-4393-b4bb-3aceed0d7759" 00:20:31.095 ], 00:20:31.095 "product_name": "Malloc disk", 00:20:31.095 "block_size": 4128, 00:20:31.095 "num_blocks": 8192, 00:20:31.095 "uuid": "1e4019da-2aa0-4393-b4bb-3aceed0d7759", 00:20:31.095 "md_size": 32, 00:20:31.095 "md_interleave": true, 00:20:31.095 "dif_type": 0, 00:20:31.095 "assigned_rate_limits": { 00:20:31.095 "rw_ios_per_sec": 0, 00:20:31.095 "rw_mbytes_per_sec": 0, 00:20:31.095 "r_mbytes_per_sec": 0, 00:20:31.095 "w_mbytes_per_sec": 0 00:20:31.095 }, 00:20:31.095 "claimed": true, 00:20:31.095 "claim_type": "exclusive_write", 
00:20:31.095 "zoned": false, 00:20:31.095 "supported_io_types": { 00:20:31.095 "read": true, 00:20:31.095 "write": true, 00:20:31.095 "unmap": true, 00:20:31.095 "flush": true, 00:20:31.095 "reset": true, 00:20:31.095 "nvme_admin": false, 00:20:31.095 "nvme_io": false, 00:20:31.095 "nvme_io_md": false, 00:20:31.095 "write_zeroes": true, 00:20:31.095 "zcopy": true, 00:20:31.095 "get_zone_info": false, 00:20:31.095 "zone_management": false, 00:20:31.095 "zone_append": false, 00:20:31.095 "compare": false, 00:20:31.095 "compare_and_write": false, 00:20:31.095 "abort": true, 00:20:31.095 "seek_hole": false, 00:20:31.095 "seek_data": false, 00:20:31.095 "copy": true, 00:20:31.095 "nvme_iov_md": false 00:20:31.095 }, 00:20:31.095 "memory_domains": [ 00:20:31.095 { 00:20:31.095 "dma_device_id": "system", 00:20:31.095 "dma_device_type": 1 00:20:31.095 }, 00:20:31.095 { 00:20:31.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.095 "dma_device_type": 2 00:20:31.095 } 00:20:31.095 ], 00:20:31.095 "driver_specific": {} 00:20:31.095 } 00:20:31.095 ] 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.095 
04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.095 "name": "Existed_Raid", 00:20:31.095 "uuid": "eea3b541-1b3c-4d85-bfb9-53d2b548971c", 00:20:31.095 "strip_size_kb": 0, 00:20:31.095 "state": "online", 00:20:31.095 "raid_level": "raid1", 00:20:31.095 "superblock": true, 00:20:31.095 "num_base_bdevs": 2, 00:20:31.095 "num_base_bdevs_discovered": 2, 00:20:31.095 
"num_base_bdevs_operational": 2, 00:20:31.095 "base_bdevs_list": [ 00:20:31.095 { 00:20:31.095 "name": "BaseBdev1", 00:20:31.095 "uuid": "fe103a1e-054b-4872-941f-261fd11bc2fb", 00:20:31.095 "is_configured": true, 00:20:31.095 "data_offset": 256, 00:20:31.095 "data_size": 7936 00:20:31.095 }, 00:20:31.095 { 00:20:31.095 "name": "BaseBdev2", 00:20:31.095 "uuid": "1e4019da-2aa0-4393-b4bb-3aceed0d7759", 00:20:31.095 "is_configured": true, 00:20:31.095 "data_offset": 256, 00:20:31.095 "data_size": 7936 00:20:31.095 } 00:20:31.095 ] 00:20:31.095 }' 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.095 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.663 04:37:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.663 [2024-11-27 04:37:27.950914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.663 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.663 "name": "Existed_Raid", 00:20:31.663 "aliases": [ 00:20:31.663 "eea3b541-1b3c-4d85-bfb9-53d2b548971c" 00:20:31.663 ], 00:20:31.663 "product_name": "Raid Volume", 00:20:31.663 "block_size": 4128, 00:20:31.663 "num_blocks": 7936, 00:20:31.663 "uuid": "eea3b541-1b3c-4d85-bfb9-53d2b548971c", 00:20:31.663 "md_size": 32, 00:20:31.663 "md_interleave": true, 00:20:31.663 "dif_type": 0, 00:20:31.663 "assigned_rate_limits": { 00:20:31.663 "rw_ios_per_sec": 0, 00:20:31.663 "rw_mbytes_per_sec": 0, 00:20:31.663 "r_mbytes_per_sec": 0, 00:20:31.663 "w_mbytes_per_sec": 0 00:20:31.663 }, 00:20:31.663 "claimed": false, 00:20:31.663 "zoned": false, 00:20:31.663 "supported_io_types": { 00:20:31.663 "read": true, 00:20:31.663 "write": true, 00:20:31.663 "unmap": false, 00:20:31.663 "flush": false, 00:20:31.663 "reset": true, 00:20:31.663 "nvme_admin": false, 00:20:31.663 "nvme_io": false, 00:20:31.663 "nvme_io_md": false, 00:20:31.663 "write_zeroes": true, 00:20:31.663 "zcopy": false, 00:20:31.663 "get_zone_info": false, 00:20:31.663 "zone_management": false, 00:20:31.663 "zone_append": false, 00:20:31.663 "compare": false, 00:20:31.663 "compare_and_write": false, 00:20:31.663 "abort": false, 00:20:31.663 "seek_hole": false, 00:20:31.663 "seek_data": false, 00:20:31.663 "copy": false, 00:20:31.663 "nvme_iov_md": false 00:20:31.663 }, 00:20:31.663 "memory_domains": [ 00:20:31.663 { 00:20:31.663 "dma_device_id": "system", 00:20:31.663 "dma_device_type": 1 00:20:31.663 }, 00:20:31.663 { 00:20:31.663 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:31.663 "dma_device_type": 2 00:20:31.663 }, 00:20:31.663 { 00:20:31.663 "dma_device_id": "system", 00:20:31.663 "dma_device_type": 1 00:20:31.663 }, 00:20:31.663 { 00:20:31.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.663 "dma_device_type": 2 00:20:31.663 } 00:20:31.663 ], 00:20:31.663 "driver_specific": { 00:20:31.663 "raid": { 00:20:31.663 "uuid": "eea3b541-1b3c-4d85-bfb9-53d2b548971c", 00:20:31.663 "strip_size_kb": 0, 00:20:31.664 "state": "online", 00:20:31.664 "raid_level": "raid1", 00:20:31.664 "superblock": true, 00:20:31.664 "num_base_bdevs": 2, 00:20:31.664 "num_base_bdevs_discovered": 2, 00:20:31.664 "num_base_bdevs_operational": 2, 00:20:31.664 "base_bdevs_list": [ 00:20:31.664 { 00:20:31.664 "name": "BaseBdev1", 00:20:31.664 "uuid": "fe103a1e-054b-4872-941f-261fd11bc2fb", 00:20:31.664 "is_configured": true, 00:20:31.664 "data_offset": 256, 00:20:31.664 "data_size": 7936 00:20:31.664 }, 00:20:31.664 { 00:20:31.664 "name": "BaseBdev2", 00:20:31.664 "uuid": "1e4019da-2aa0-4393-b4bb-3aceed0d7759", 00:20:31.664 "is_configured": true, 00:20:31.664 "data_offset": 256, 00:20:31.664 "data_size": 7936 00:20:31.664 } 00:20:31.664 ] 00:20:31.664 } 00:20:31.664 } 00:20:31.664 }' 00:20:31.664 04:37:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:31.664 BaseBdev2' 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:31.664 
04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.664 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.664 [2024-11-27 04:37:28.186294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.922 04:37:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.922 "name": "Existed_Raid", 00:20:31.922 "uuid": "eea3b541-1b3c-4d85-bfb9-53d2b548971c", 00:20:31.922 "strip_size_kb": 0, 00:20:31.922 "state": "online", 00:20:31.922 "raid_level": "raid1", 00:20:31.922 "superblock": true, 00:20:31.922 "num_base_bdevs": 2, 00:20:31.922 "num_base_bdevs_discovered": 1, 00:20:31.922 "num_base_bdevs_operational": 1, 00:20:31.922 "base_bdevs_list": [ 00:20:31.922 { 00:20:31.922 "name": null, 00:20:31.922 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:31.922 "is_configured": false, 00:20:31.922 "data_offset": 0, 00:20:31.922 "data_size": 7936 00:20:31.922 }, 00:20:31.922 { 00:20:31.922 "name": "BaseBdev2", 00:20:31.922 "uuid": "1e4019da-2aa0-4393-b4bb-3aceed0d7759", 00:20:31.922 "is_configured": true, 00:20:31.922 "data_offset": 256, 00:20:31.922 "data_size": 7936 00:20:31.922 } 00:20:31.922 ] 00:20:31.922 }' 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.922 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.181 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:32.181 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.181 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:32.181 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.181 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.181 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.181 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:32.440 04:37:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.440 [2024-11-27 04:37:28.776672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:32.440 [2024-11-27 04:37:28.776858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.440 [2024-11-27 04:37:28.890645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.440 [2024-11-27 04:37:28.890802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.440 [2024-11-27 04:37:28.890855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88893 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88893 ']' 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88893 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88893 00:20:32.440 killing process with pid 88893 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88893' 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88893 00:20:32.440 [2024-11-27 04:37:28.974568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.440 04:37:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88893 00:20:32.440 [2024-11-27 04:37:28.993560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:33.816 
04:37:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:33.816 00:20:33.816 real 0m5.474s 00:20:33.816 user 0m7.901s 00:20:33.816 sys 0m0.913s 00:20:33.816 04:37:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.816 04:37:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:33.816 ************************************ 00:20:33.816 END TEST raid_state_function_test_sb_md_interleaved 00:20:33.816 ************************************ 00:20:33.816 04:37:30 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:33.816 04:37:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:33.816 04:37:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.816 04:37:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.816 ************************************ 00:20:33.816 START TEST raid_superblock_test_md_interleaved 00:20:33.816 ************************************ 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:33.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89145 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89145 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89145 ']' 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.816 04:37:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:33.816 [2024-11-27 04:37:30.387234] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:20:33.816 [2024-11-27 04:37:30.387442] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89145 ] 00:20:34.074 [2024-11-27 04:37:30.568588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.331 [2024-11-27 04:37:30.697555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.331 [2024-11-27 04:37:30.916251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.331 [2024-11-27 04:37:30.916362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.926 malloc1 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.926 [2024-11-27 04:37:31.337068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:34.926 [2024-11-27 04:37:31.337272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.926 [2024-11-27 04:37:31.337311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:34.926 [2024-11-27 04:37:31.337323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.926 [2024-11-27 04:37:31.339571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.926 [2024-11-27 04:37:31.339611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:34.926 pt1 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:34.926 04:37:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.926 malloc2 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.926 [2024-11-27 04:37:31.395952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:34.926 [2024-11-27 04:37:31.396078] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.926 [2024-11-27 04:37:31.396135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:34.926 [2024-11-27 04:37:31.396190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.926 [2024-11-27 04:37:31.398285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.926 [2024-11-27 04:37:31.398360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:34.926 pt2 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.926 [2024-11-27 04:37:31.407940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:34.926 [2024-11-27 04:37:31.409902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:34.926 [2024-11-27 04:37:31.410160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:34.926 [2024-11-27 04:37:31.410217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:34.926 [2024-11-27 04:37:31.410330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:34.926 [2024-11-27 04:37:31.410446] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:34.926 [2024-11-27 04:37:31.410493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:34.926 [2024-11-27 04:37:31.410614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.926 "name": "raid_bdev1", 00:20:34.926 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:34.926 "strip_size_kb": 0, 00:20:34.926 "state": "online", 00:20:34.926 "raid_level": "raid1", 00:20:34.926 "superblock": true, 00:20:34.926 "num_base_bdevs": 2, 00:20:34.926 "num_base_bdevs_discovered": 2, 00:20:34.926 "num_base_bdevs_operational": 2, 00:20:34.926 "base_bdevs_list": [ 00:20:34.926 { 00:20:34.926 "name": "pt1", 00:20:34.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:34.926 "is_configured": true, 00:20:34.926 "data_offset": 256, 00:20:34.926 "data_size": 7936 00:20:34.926 }, 00:20:34.926 { 00:20:34.926 "name": "pt2", 00:20:34.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:34.926 "is_configured": true, 00:20:34.926 "data_offset": 256, 00:20:34.926 "data_size": 7936 00:20:34.926 } 00:20:34.926 ] 00:20:34.926 }' 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.926 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:35.496 04:37:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.496 [2024-11-27 04:37:31.899491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.496 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.496 "name": "raid_bdev1", 00:20:35.496 "aliases": [ 00:20:35.496 "32a77317-55a6-405a-a60c-e559482f63d1" 00:20:35.496 ], 00:20:35.496 "product_name": "Raid Volume", 00:20:35.496 "block_size": 4128, 00:20:35.496 "num_blocks": 7936, 00:20:35.496 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:35.496 "md_size": 32, 00:20:35.496 "md_interleave": true, 00:20:35.496 "dif_type": 0, 00:20:35.496 "assigned_rate_limits": { 00:20:35.496 "rw_ios_per_sec": 0, 00:20:35.496 "rw_mbytes_per_sec": 0, 00:20:35.496 "r_mbytes_per_sec": 0, 00:20:35.496 "w_mbytes_per_sec": 0 00:20:35.496 }, 00:20:35.496 "claimed": false, 00:20:35.496 "zoned": false, 00:20:35.496 "supported_io_types": { 00:20:35.496 "read": true, 00:20:35.496 "write": true, 00:20:35.496 "unmap": false, 00:20:35.496 "flush": false, 00:20:35.496 "reset": true, 
00:20:35.496 "nvme_admin": false, 00:20:35.496 "nvme_io": false, 00:20:35.496 "nvme_io_md": false, 00:20:35.496 "write_zeroes": true, 00:20:35.496 "zcopy": false, 00:20:35.496 "get_zone_info": false, 00:20:35.496 "zone_management": false, 00:20:35.496 "zone_append": false, 00:20:35.496 "compare": false, 00:20:35.496 "compare_and_write": false, 00:20:35.496 "abort": false, 00:20:35.496 "seek_hole": false, 00:20:35.496 "seek_data": false, 00:20:35.496 "copy": false, 00:20:35.496 "nvme_iov_md": false 00:20:35.496 }, 00:20:35.496 "memory_domains": [ 00:20:35.496 { 00:20:35.496 "dma_device_id": "system", 00:20:35.496 "dma_device_type": 1 00:20:35.496 }, 00:20:35.496 { 00:20:35.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.496 "dma_device_type": 2 00:20:35.496 }, 00:20:35.496 { 00:20:35.496 "dma_device_id": "system", 00:20:35.496 "dma_device_type": 1 00:20:35.496 }, 00:20:35.496 { 00:20:35.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.496 "dma_device_type": 2 00:20:35.496 } 00:20:35.496 ], 00:20:35.496 "driver_specific": { 00:20:35.496 "raid": { 00:20:35.496 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:35.496 "strip_size_kb": 0, 00:20:35.496 "state": "online", 00:20:35.496 "raid_level": "raid1", 00:20:35.496 "superblock": true, 00:20:35.496 "num_base_bdevs": 2, 00:20:35.496 "num_base_bdevs_discovered": 2, 00:20:35.496 "num_base_bdevs_operational": 2, 00:20:35.496 "base_bdevs_list": [ 00:20:35.496 { 00:20:35.496 "name": "pt1", 00:20:35.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.496 "is_configured": true, 00:20:35.496 "data_offset": 256, 00:20:35.496 "data_size": 7936 00:20:35.496 }, 00:20:35.496 { 00:20:35.496 "name": "pt2", 00:20:35.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.496 "is_configured": true, 00:20:35.496 "data_offset": 256, 00:20:35.496 "data_size": 7936 00:20:35.497 } 00:20:35.497 ] 00:20:35.497 } 00:20:35.497 } 00:20:35.497 }' 00:20:35.497 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:35.497 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:35.497 pt2' 00:20:35.497 04:37:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:35.497 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:35.497 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:35.497 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:35.497 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.497 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.497 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:35.497 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.758 
04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.758 [2024-11-27 04:37:32.150993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=32a77317-55a6-405a-a60c-e559482f63d1 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 32a77317-55a6-405a-a60c-e559482f63d1 ']' 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.758 04:37:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.758 [2024-11-27 04:37:32.198603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.758 [2024-11-27 04:37:32.198669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:35.758 [2024-11-27 04:37:32.198786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:35.758 [2024-11-27 04:37:32.198861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:35.758 [2024-11-27 04:37:32.198914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:35.758 04:37:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:35.758 
04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.758 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.758 [2024-11-27 04:37:32.338412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:35.758 [2024-11-27 04:37:32.340563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:35.758 [2024-11-27 04:37:32.340722] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:35.759 [2024-11-27 04:37:32.340883] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:35.759 [2024-11-27 04:37:32.340955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.759 [2024-11-27 04:37:32.341027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:36.018 request: 
00:20:36.019 { 00:20:36.019 "name": "raid_bdev1", 00:20:36.019 "raid_level": "raid1", 00:20:36.019 "base_bdevs": [ 00:20:36.019 "malloc1", 00:20:36.019 "malloc2" 00:20:36.019 ], 00:20:36.019 "superblock": false, 00:20:36.019 "method": "bdev_raid_create", 00:20:36.019 "req_id": 1 00:20:36.019 } 00:20:36.019 Got JSON-RPC error response 00:20:36.019 response: 00:20:36.019 { 00:20:36.019 "code": -17, 00:20:36.019 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:36.019 } 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.019 [2024-11-27 04:37:32.398279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:36.019 [2024-11-27 04:37:32.398417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.019 [2024-11-27 04:37:32.398458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:36.019 [2024-11-27 04:37:32.398497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.019 [2024-11-27 04:37:32.400694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.019 [2024-11-27 04:37:32.400780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:36.019 [2024-11-27 04:37:32.400870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:36.019 [2024-11-27 04:37:32.400964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:36.019 pt1 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.019 04:37:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.019 "name": "raid_bdev1", 00:20:36.019 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:36.019 "strip_size_kb": 0, 00:20:36.019 "state": "configuring", 00:20:36.019 "raid_level": "raid1", 00:20:36.019 "superblock": true, 00:20:36.019 "num_base_bdevs": 2, 00:20:36.019 "num_base_bdevs_discovered": 1, 00:20:36.019 "num_base_bdevs_operational": 2, 00:20:36.019 "base_bdevs_list": [ 00:20:36.019 { 00:20:36.019 "name": "pt1", 00:20:36.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.019 "is_configured": true, 00:20:36.019 
"data_offset": 256, 00:20:36.019 "data_size": 7936 00:20:36.019 }, 00:20:36.019 { 00:20:36.019 "name": null, 00:20:36.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.019 "is_configured": false, 00:20:36.019 "data_offset": 256, 00:20:36.019 "data_size": 7936 00:20:36.019 } 00:20:36.019 ] 00:20:36.019 }' 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.019 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.279 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:36.279 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:36.279 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:36.279 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:36.279 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.279 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.539 [2024-11-27 04:37:32.865473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:36.539 [2024-11-27 04:37:32.865561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.539 [2024-11-27 04:37:32.865585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:36.539 [2024-11-27 04:37:32.865596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.539 [2024-11-27 04:37:32.865779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.539 [2024-11-27 04:37:32.865796] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:20:36.539 [2024-11-27 04:37:32.865851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:36.539 [2024-11-27 04:37:32.865874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:36.539 [2024-11-27 04:37:32.865959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:36.539 [2024-11-27 04:37:32.865970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:36.539 [2024-11-27 04:37:32.866060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:36.539 [2024-11-27 04:37:32.866152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:36.539 [2024-11-27 04:37:32.866162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:36.539 [2024-11-27 04:37:32.866236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.539 pt2 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.539 04:37:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.539 "name": "raid_bdev1", 00:20:36.539 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:36.539 "strip_size_kb": 0, 00:20:36.539 "state": "online", 00:20:36.539 "raid_level": "raid1", 00:20:36.539 "superblock": true, 00:20:36.539 "num_base_bdevs": 2, 00:20:36.539 "num_base_bdevs_discovered": 2, 00:20:36.539 "num_base_bdevs_operational": 2, 00:20:36.539 "base_bdevs_list": [ 00:20:36.539 { 00:20:36.539 "name": "pt1", 00:20:36.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.539 "is_configured": true, 00:20:36.539 
"data_offset": 256, 00:20:36.539 "data_size": 7936 00:20:36.539 }, 00:20:36.539 { 00:20:36.539 "name": "pt2", 00:20:36.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.539 "is_configured": true, 00:20:36.539 "data_offset": 256, 00:20:36.539 "data_size": 7936 00:20:36.539 } 00:20:36.539 ] 00:20:36.539 }' 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.539 04:37:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.799 [2024-11-27 04:37:33.353021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.799 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:37.059 "name": "raid_bdev1", 00:20:37.059 "aliases": [ 00:20:37.059 "32a77317-55a6-405a-a60c-e559482f63d1" 00:20:37.059 ], 00:20:37.059 "product_name": "Raid Volume", 00:20:37.059 "block_size": 4128, 00:20:37.059 "num_blocks": 7936, 00:20:37.059 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:37.059 "md_size": 32, 00:20:37.059 "md_interleave": true, 00:20:37.059 "dif_type": 0, 00:20:37.059 "assigned_rate_limits": { 00:20:37.059 "rw_ios_per_sec": 0, 00:20:37.059 "rw_mbytes_per_sec": 0, 00:20:37.059 "r_mbytes_per_sec": 0, 00:20:37.059 "w_mbytes_per_sec": 0 00:20:37.059 }, 00:20:37.059 "claimed": false, 00:20:37.059 "zoned": false, 00:20:37.059 "supported_io_types": { 00:20:37.059 "read": true, 00:20:37.059 "write": true, 00:20:37.059 "unmap": false, 00:20:37.059 "flush": false, 00:20:37.059 "reset": true, 00:20:37.059 "nvme_admin": false, 00:20:37.059 "nvme_io": false, 00:20:37.059 "nvme_io_md": false, 00:20:37.059 "write_zeroes": true, 00:20:37.059 "zcopy": false, 00:20:37.059 "get_zone_info": false, 00:20:37.059 "zone_management": false, 00:20:37.059 "zone_append": false, 00:20:37.059 "compare": false, 00:20:37.059 "compare_and_write": false, 00:20:37.059 "abort": false, 00:20:37.059 "seek_hole": false, 00:20:37.059 "seek_data": false, 00:20:37.059 "copy": false, 00:20:37.059 "nvme_iov_md": false 00:20:37.059 }, 00:20:37.059 "memory_domains": [ 00:20:37.059 { 00:20:37.059 "dma_device_id": "system", 00:20:37.059 "dma_device_type": 1 00:20:37.059 }, 00:20:37.059 { 00:20:37.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.059 "dma_device_type": 2 00:20:37.059 }, 00:20:37.059 { 00:20:37.059 "dma_device_id": "system", 00:20:37.059 "dma_device_type": 1 00:20:37.059 }, 00:20:37.059 { 00:20:37.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.059 "dma_device_type": 2 00:20:37.059 } 00:20:37.059 ], 00:20:37.059 "driver_specific": { 
00:20:37.059 "raid": { 00:20:37.059 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:37.059 "strip_size_kb": 0, 00:20:37.059 "state": "online", 00:20:37.059 "raid_level": "raid1", 00:20:37.059 "superblock": true, 00:20:37.059 "num_base_bdevs": 2, 00:20:37.059 "num_base_bdevs_discovered": 2, 00:20:37.059 "num_base_bdevs_operational": 2, 00:20:37.059 "base_bdevs_list": [ 00:20:37.059 { 00:20:37.059 "name": "pt1", 00:20:37.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:37.059 "is_configured": true, 00:20:37.059 "data_offset": 256, 00:20:37.059 "data_size": 7936 00:20:37.059 }, 00:20:37.059 { 00:20:37.059 "name": "pt2", 00:20:37.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.059 "is_configured": true, 00:20:37.059 "data_offset": 256, 00:20:37.059 "data_size": 7936 00:20:37.059 } 00:20:37.059 ] 00:20:37.059 } 00:20:37.059 } 00:20:37.059 }' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:37.059 pt2' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.059 [2024-11-27 04:37:33.564644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 32a77317-55a6-405a-a60c-e559482f63d1 '!=' 32a77317-55a6-405a-a60c-e559482f63d1 ']' 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.059 [2024-11-27 04:37:33.608300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.059 
04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.059 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.060 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.060 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.060 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.060 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.060 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.319 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.319 "name": "raid_bdev1", 00:20:37.319 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:37.319 "strip_size_kb": 0, 00:20:37.319 "state": "online", 00:20:37.319 "raid_level": "raid1", 00:20:37.319 "superblock": true, 00:20:37.319 "num_base_bdevs": 2, 00:20:37.319 "num_base_bdevs_discovered": 1, 00:20:37.319 "num_base_bdevs_operational": 1, 00:20:37.319 "base_bdevs_list": [ 00:20:37.319 { 00:20:37.319 "name": null, 00:20:37.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.319 "is_configured": false, 00:20:37.319 
"data_offset": 0, 00:20:37.319 "data_size": 7936 00:20:37.319 }, 00:20:37.319 { 00:20:37.319 "name": "pt2", 00:20:37.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.319 "is_configured": true, 00:20:37.319 "data_offset": 256, 00:20:37.319 "data_size": 7936 00:20:37.319 } 00:20:37.319 ] 00:20:37.319 }' 00:20:37.319 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.319 04:37:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 [2024-11-27 04:37:34.075491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.578 [2024-11-27 04:37:34.075589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.578 [2024-11-27 04:37:34.075727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.578 [2024-11-27 04:37:34.075830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.578 [2024-11-27 04:37:34.075896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.578 04:37:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 [2024-11-27 04:37:34.135399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.578 [2024-11-27 04:37:34.135553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.578 [2024-11-27 04:37:34.135619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:37.578 [2024-11-27 04:37:34.135639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.578 [2024-11-27 04:37:34.137992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.578 [2024-11-27 04:37:34.138101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.578 [2024-11-27 04:37:34.138186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:37.578 [2024-11-27 04:37:34.138260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.578 [2024-11-27 04:37:34.138343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:37.578 [2024-11-27 04:37:34.138358] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:37.578 [2024-11-27 04:37:34.138478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:37.578 [2024-11-27 04:37:34.138565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:37.578 [2024-11-27 04:37:34.138578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:37.578 [2024-11-27 04:37:34.138669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:37.578 pt2 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.836 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.836 "name": "raid_bdev1", 00:20:37.836 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:37.836 "strip_size_kb": 0, 00:20:37.836 "state": "online", 00:20:37.836 "raid_level": "raid1", 00:20:37.836 "superblock": true, 00:20:37.836 "num_base_bdevs": 2, 00:20:37.836 "num_base_bdevs_discovered": 1, 00:20:37.836 "num_base_bdevs_operational": 1, 00:20:37.836 "base_bdevs_list": [ 00:20:37.836 { 00:20:37.836 "name": null, 00:20:37.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.836 "is_configured": false, 00:20:37.836 "data_offset": 256, 00:20:37.836 "data_size": 7936 00:20:37.836 }, 00:20:37.836 { 00:20:37.836 "name": "pt2", 00:20:37.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.836 "is_configured": true, 00:20:37.836 "data_offset": 256, 00:20:37.836 "data_size": 7936 00:20:37.836 } 00:20:37.836 ] 00:20:37.836 }' 00:20:37.836 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.836 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.096 [2024-11-27 04:37:34.531216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:38.096 [2024-11-27 04:37:34.531252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:38.096 [2024-11-27 04:37:34.531342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.096 
[2024-11-27 04:37:34.531400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:38.096 [2024-11-27 04:37:34.531411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.096 [2024-11-27 04:37:34.595259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:38.096 [2024-11-27 04:37:34.595401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:20:38.096 [2024-11-27 04:37:34.595462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:38.096 [2024-11-27 04:37:34.595514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.096 [2024-11-27 04:37:34.597911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.096 [2024-11-27 04:37:34.598008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:38.096 [2024-11-27 04:37:34.598141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:38.096 [2024-11-27 04:37:34.598240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:38.096 [2024-11-27 04:37:34.598404] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:38.096 [2024-11-27 04:37:34.598468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:38.096 [2024-11-27 04:37:34.598523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:38.096 [2024-11-27 04:37:34.598646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:38.096 [2024-11-27 04:37:34.598786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:38.096 [2024-11-27 04:37:34.598828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:38.096 [2024-11-27 04:37:34.598937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:38.096 [2024-11-27 04:37:34.599047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:38.096 [2024-11-27 04:37:34.599105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:38.096 [2024-11-27 
04:37:34.599270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.096 pt1 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.096 
04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.096 "name": "raid_bdev1", 00:20:38.096 "uuid": "32a77317-55a6-405a-a60c-e559482f63d1", 00:20:38.096 "strip_size_kb": 0, 00:20:38.096 "state": "online", 00:20:38.096 "raid_level": "raid1", 00:20:38.096 "superblock": true, 00:20:38.096 "num_base_bdevs": 2, 00:20:38.096 "num_base_bdevs_discovered": 1, 00:20:38.096 "num_base_bdevs_operational": 1, 00:20:38.096 "base_bdevs_list": [ 00:20:38.096 { 00:20:38.096 "name": null, 00:20:38.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.096 "is_configured": false, 00:20:38.096 "data_offset": 256, 00:20:38.096 "data_size": 7936 00:20:38.096 }, 00:20:38.096 { 00:20:38.096 "name": "pt2", 00:20:38.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.096 "is_configured": true, 00:20:38.096 "data_offset": 256, 00:20:38.096 "data_size": 7936 00:20:38.096 } 00:20:38.096 ] 00:20:38.096 }' 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.096 04:37:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.664 [2024-11-27 04:37:35.071420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 32a77317-55a6-405a-a60c-e559482f63d1 '!=' 32a77317-55a6-405a-a60c-e559482f63d1 ']' 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89145 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89145 ']' 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89145 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89145 00:20:38.664 killing process with pid 89145 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.664 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89145' 00:20:38.665 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89145 00:20:38.665 [2024-11-27 04:37:35.145518] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:38.665 [2024-11-27 04:37:35.145617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.665 [2024-11-27 04:37:35.145665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:38.665 [2024-11-27 04:37:35.145680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:38.665 04:37:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89145 00:20:38.925 [2024-11-27 04:37:35.355510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:40.305 04:37:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:40.305 00:20:40.305 real 0m6.232s 00:20:40.305 user 0m9.443s 00:20:40.305 sys 0m1.073s 00:20:40.305 ************************************ 00:20:40.305 END TEST raid_superblock_test_md_interleaved 00:20:40.305 ************************************ 00:20:40.305 04:37:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.305 04:37:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:40.305 04:37:36 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:40.305 04:37:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:40.305 04:37:36 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:40.305 04:37:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:40.305 ************************************
00:20:40.305 START TEST raid_rebuild_test_sb_md_interleaved
00:20:40.305 ************************************
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:20:40.305 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:20:40.306 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89472
00:20:40.306 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:20:40.306 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89472
00:20:40.306 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89472 ']'
00:20:40.306 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:40.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:40.306 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:40.306 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:40.306 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:40.306 04:37:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:40.306 I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:40.306 Zero copy mechanism will not be used.
00:20:40.306 [2024-11-27 04:37:36.693284] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:20:40.306 [2024-11-27 04:37:36.693431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89472 ]
00:20:40.306 [2024-11-27 04:37:36.868126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:40.565 [2024-11-27 04:37:36.986388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:40.825 [2024-11-27 04:37:37.190377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:40.825 [2024-11-27 04:37:37.190444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.105 BaseBdev1_malloc
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.105 [2024-11-27 04:37:37.614671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:41.105 [2024-11-27 04:37:37.614735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:41.105 [2024-11-27 04:37:37.614757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:20:41.105 [2024-11-27 04:37:37.614769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:41.105 [2024-11-27 04:37:37.616752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:41.105 [2024-11-27 04:37:37.616795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:41.105 BaseBdev1
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.105 BaseBdev2_malloc
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.105 [2024-11-27 04:37:37.669151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:20:41.105 [2024-11-27 04:37:37.669212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:41.105 [2024-11-27 04:37:37.669231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:20:41.105 [2024-11-27 04:37:37.669244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:41.105 [2024-11-27 04:37:37.671156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:41.105 [2024-11-27 04:37:37.671189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:41.105 BaseBdev2
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.105 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.365 spare_malloc
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.365 spare_delay
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.365 [2024-11-27 04:37:37.750380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:41.365 [2024-11-27 04:37:37.750449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:41.365 [2024-11-27 04:37:37.750474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:20:41.365 [2024-11-27 04:37:37.750487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:41.365 [2024-11-27 04:37:37.752618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
spare [2024-11-27 04:37:37.752722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.365 [2024-11-27 04:37:37.758428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:41.365 [2024-11-27 04:37:37.760501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:41.365 [2024-11-27 04:37:37.760769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:20:41.365 [2024-11-27 04:37:37.760826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:20:41.365 [2024-11-27 04:37:37.760941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:20:41.365 [2024-11-27 04:37:37.761061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:20:41.365 [2024-11-27 04:37:37.761116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:20:41.365 [2024-11-27 04:37:37.761241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:41.365 "name": "raid_bdev1",
00:20:41.365 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c",
00:20:41.365 "strip_size_kb": 0,
00:20:41.365 "state": "online",
00:20:41.365 "raid_level": "raid1",
00:20:41.365 "superblock": true,
00:20:41.365 "num_base_bdevs": 2,
00:20:41.365 "num_base_bdevs_discovered": 2,
00:20:41.365 "num_base_bdevs_operational": 2,
00:20:41.365 "base_bdevs_list": [
00:20:41.365 {
00:20:41.365 "name": "BaseBdev1",
00:20:41.365 "uuid": "9f7ca3df-e787-5baa-97fb-593868a75a1c",
00:20:41.365 "is_configured": true,
00:20:41.365 "data_offset": 256,
00:20:41.365 "data_size": 7936
00:20:41.365 },
00:20:41.365 {
00:20:41.365 "name": "BaseBdev2",
00:20:41.365 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3",
00:20:41.365 "is_configured": true,
00:20:41.365 "data_offset": 256,
00:20:41.365 "data_size": 7936
00:20:41.365 }
00:20:41.365 ]
00:20:41.365 }'
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:41.365 04:37:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.934 [2024-11-27 04:37:38.226226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']'
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.934 [2024-11-27 04:37:38.313685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:41.934 "name": "raid_bdev1",
00:20:41.934 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c",
00:20:41.934 "strip_size_kb": 0,
00:20:41.934 "state": "online",
00:20:41.934 "raid_level": "raid1",
00:20:41.934 "superblock": true,
00:20:41.934 "num_base_bdevs": 2,
00:20:41.934 "num_base_bdevs_discovered": 1,
00:20:41.934 "num_base_bdevs_operational": 1,
00:20:41.934 "base_bdevs_list": [
00:20:41.934 {
00:20:41.934 "name": null,
00:20:41.934 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:41.934 "is_configured": false,
00:20:41.934 "data_offset": 0,
00:20:41.934 "data_size": 7936
00:20:41.934 },
00:20:41.934 {
00:20:41.934 "name": "BaseBdev2",
00:20:41.934 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3",
00:20:41.934 "is_configured": true,
00:20:41.934 "data_offset": 256,
00:20:41.934 "data_size": 7936
00:20:41.934 }
00:20:41.934 ]
00:20:41.934 }'
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:41.934 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:42.193 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:42.193 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:42.193 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:42.193 [2024-11-27 04:37:38.756999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:42.193 [2024-11-27 04:37:38.775682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:20:42.193 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:42.193 04:37:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1
00:20:42.193 [2024-11-27 04:37:38.777767] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:43.572 "name": "raid_bdev1",
00:20:43.572 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c",
00:20:43.572 "strip_size_kb": 0,
00:20:43.572 "state": "online",
00:20:43.572 "raid_level": "raid1",
00:20:43.572 "superblock": true,
00:20:43.572 "num_base_bdevs": 2,
00:20:43.572 "num_base_bdevs_discovered": 2,
00:20:43.572 "num_base_bdevs_operational": 2,
00:20:43.572 "process": {
00:20:43.572 "type": "rebuild",
00:20:43.572 "target": "spare",
00:20:43.572 "progress": {
00:20:43.572 "blocks": 2560,
00:20:43.572 "percent": 32
00:20:43.572 }
00:20:43.572 },
00:20:43.572 "base_bdevs_list": [
00:20:43.572 {
00:20:43.572 "name": "spare",
00:20:43.572 "uuid": "22216293-3651-54e0-84cf-08c76fca4710",
00:20:43.572 "is_configured": true,
00:20:43.572 "data_offset": 256,
00:20:43.572 "data_size": 7936
00:20:43.572 },
00:20:43.572 {
00:20:43.572 "name": "BaseBdev2",
00:20:43.572 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3",
00:20:43.572 "is_configured": true,
00:20:43.572 "data_offset": 256,
00:20:43.572 "data_size": 7936
00:20:43.572 }
00:20:43.572 ]
00:20:43.572 }'
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.572 04:37:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:43.572 [2024-11-27 04:37:39.937299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:43.572 [2024-11-27 04:37:39.983681] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:43.572 [2024-11-27 04:37:39.983848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:43.572 [2024-11-27 04:37:39.983895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:43.572 [2024-11-27 04:37:39.983927] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:43.572 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.572 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:43.572 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:43.572 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:43.572 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:43.572 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:43.572 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:20:43.572 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:43.573 "name": "raid_bdev1",
00:20:43.573 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c",
00:20:43.573 "strip_size_kb": 0,
00:20:43.573 "state": "online",
00:20:43.573 "raid_level": "raid1",
00:20:43.573 "superblock": true,
00:20:43.573 "num_base_bdevs": 2,
00:20:43.573 "num_base_bdevs_discovered": 1,
00:20:43.573 "num_base_bdevs_operational": 1,
00:20:43.573 "base_bdevs_list": [
00:20:43.573 {
00:20:43.573 "name": null,
00:20:43.573 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:43.573 "is_configured": false,
00:20:43.573 "data_offset": 0,
00:20:43.573 "data_size": 7936
00:20:43.573 },
00:20:43.573 {
00:20:43.573 "name": "BaseBdev2",
00:20:43.573 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3",
00:20:43.573 "is_configured": true,
00:20:43.573 "data_offset": 256,
00:20:43.573 "data_size": 7936
00:20:43.573 }
00:20:43.573 ]
00:20:43.573 }'
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:43.573 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:44.142 "name": "raid_bdev1",
00:20:44.142 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c",
00:20:44.142 "strip_size_kb": 0,
00:20:44.142 "state": "online",
00:20:44.142 "raid_level": "raid1",
00:20:44.142 "superblock": true,
00:20:44.142 "num_base_bdevs": 2,
00:20:44.142 "num_base_bdevs_discovered": 1,
00:20:44.142 "num_base_bdevs_operational": 1,
00:20:44.142 "base_bdevs_list": [
00:20:44.142 {
00:20:44.142 "name": null,
00:20:44.142 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:44.142 "is_configured": false,
00:20:44.142 "data_offset": 0,
00:20:44.142 "data_size": 7936
00:20:44.142 },
00:20:44.142 {
00:20:44.142 "name": "BaseBdev2",
00:20:44.142 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3",
00:20:44.142 "is_configured": true,
00:20:44.142 "data_offset": 256,
00:20:44.142 "data_size": 7936
00:20:44.142 }
00:20:44.142 ]
00:20:44.142 }'
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:44.142 [2024-11-27 04:37:40.575578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:44.142 [2024-11-27 04:37:40.593121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.142 04:37:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1
00:20:44.142 [2024-11-27 04:37:40.595171] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:45.108 "name": "raid_bdev1",
00:20:45.108 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c",
00:20:45.108 "strip_size_kb": 0,
00:20:45.108 "state": "online",
00:20:45.108 "raid_level": "raid1",
00:20:45.108 "superblock": true,
00:20:45.108 "num_base_bdevs": 2,
00:20:45.108 "num_base_bdevs_discovered": 2,
00:20:45.108 "num_base_bdevs_operational": 2,
00:20:45.108 "process": {
00:20:45.108 "type": "rebuild",
00:20:45.108 "target": "spare",
00:20:45.108 "progress": {
00:20:45.108 "blocks": 2560,
00:20:45.108 "percent": 32
00:20:45.108 }
00:20:45.108 },
00:20:45.108 "base_bdevs_list": [
00:20:45.108 {
00:20:45.108 "name": "spare",
00:20:45.108 "uuid": "22216293-3651-54e0-84cf-08c76fca4710",
00:20:45.108 "is_configured": true,
00:20:45.108 "data_offset": 256,
00:20:45.108 "data_size": 7936
00:20:45.108 },
00:20:45.108 {
00:20:45.108 "name": "BaseBdev2",
00:20:45.108 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3",
00:20:45.108 "is_configured": true,
00:20:45.108 "data_offset": 256,
00:20:45.108 "data_size": 7936
00:20:45.108 }
00:20:45.108 ]
00:20:45.108 }'
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:45.108 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:20:45.368 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=769
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:45.368 "name": "raid_bdev1",
00:20:45.368 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c",
00:20:45.368 "strip_size_kb": 0,
00:20:45.368 "state": "online",
00:20:45.368 "raid_level": "raid1",
00:20:45.368 "superblock": true,
00:20:45.368 "num_base_bdevs": 2,
00:20:45.368 "num_base_bdevs_discovered": 2,
00:20:45.368 "num_base_bdevs_operational": 2,
00:20:45.368 "process": {
00:20:45.368 "type": "rebuild",
00:20:45.368 "target": "spare",
00:20:45.368 "progress": {
00:20:45.368 "blocks": 2816,
00:20:45.368 "percent": 35
00:20:45.368 }
00:20:45.368 },
00:20:45.368 "base_bdevs_list": [
00:20:45.368 {
00:20:45.368 "name": "spare",
00:20:45.368 "uuid": "22216293-3651-54e0-84cf-08c76fca4710",
00:20:45.368 "is_configured": true,
00:20:45.368 "data_offset": 256,
00:20:45.368 "data_size": 7936
00:20:45.368 },
00:20:45.368 {
00:20:45.368 "name": "BaseBdev2",
00:20:45.368 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3",
00:20:45.368 "is_configured": true,
00:20:45.368 "data_offset": 256,
00:20:45.368 "data_size": 7936
00:20:45.368 }
00:20:45.368 ]
00:20:45.368 }'
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:45.368 04:37:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1
00:20:46.306 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:46.306 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:46.306 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:46.306 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:46.306 04:37:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:46.306 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:46.306 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.306 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.306 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.306 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:46.566 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.566 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:46.566 "name": "raid_bdev1", 00:20:46.566 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:46.566 "strip_size_kb": 0, 00:20:46.566 "state": "online", 00:20:46.566 "raid_level": "raid1", 00:20:46.566 "superblock": true, 00:20:46.566 "num_base_bdevs": 2, 00:20:46.566 "num_base_bdevs_discovered": 2, 00:20:46.566 "num_base_bdevs_operational": 2, 00:20:46.566 "process": { 00:20:46.566 "type": "rebuild", 00:20:46.566 "target": "spare", 00:20:46.566 "progress": { 00:20:46.566 "blocks": 5632, 00:20:46.566 "percent": 70 00:20:46.566 } 00:20:46.566 }, 00:20:46.566 "base_bdevs_list": [ 00:20:46.566 { 00:20:46.566 "name": "spare", 00:20:46.566 "uuid": "22216293-3651-54e0-84cf-08c76fca4710", 00:20:46.566 "is_configured": true, 00:20:46.566 "data_offset": 256, 00:20:46.566 "data_size": 7936 00:20:46.566 }, 00:20:46.566 { 00:20:46.566 "name": "BaseBdev2", 00:20:46.566 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:46.566 "is_configured": true, 00:20:46.566 "data_offset": 256, 00:20:46.566 "data_size": 7936 00:20:46.566 } 
00:20:46.566 ] 00:20:46.566 }' 00:20:46.566 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.566 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.566 04:37:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.566 04:37:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.566 04:37:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:47.135 [2024-11-27 04:37:43.710158] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:47.135 [2024-11-27 04:37:43.710335] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:47.135 [2024-11-27 04:37:43.710491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.705 "name": "raid_bdev1", 00:20:47.705 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:47.705 "strip_size_kb": 0, 00:20:47.705 "state": "online", 00:20:47.705 "raid_level": "raid1", 00:20:47.705 "superblock": true, 00:20:47.705 "num_base_bdevs": 2, 00:20:47.705 "num_base_bdevs_discovered": 2, 00:20:47.705 "num_base_bdevs_operational": 2, 00:20:47.705 "base_bdevs_list": [ 00:20:47.705 { 00:20:47.705 "name": "spare", 00:20:47.705 "uuid": "22216293-3651-54e0-84cf-08c76fca4710", 00:20:47.705 "is_configured": true, 00:20:47.705 "data_offset": 256, 00:20:47.705 "data_size": 7936 00:20:47.705 }, 00:20:47.705 { 00:20:47.705 "name": "BaseBdev2", 00:20:47.705 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:47.705 "is_configured": true, 00:20:47.705 "data_offset": 256, 00:20:47.705 "data_size": 7936 00:20:47.705 } 00:20:47.705 ] 00:20:47.705 }' 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@709 -- # break 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.705 "name": "raid_bdev1", 00:20:47.705 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:47.705 "strip_size_kb": 0, 00:20:47.705 "state": "online", 00:20:47.705 "raid_level": "raid1", 00:20:47.705 "superblock": true, 00:20:47.705 "num_base_bdevs": 2, 00:20:47.705 "num_base_bdevs_discovered": 2, 00:20:47.705 "num_base_bdevs_operational": 2, 00:20:47.705 "base_bdevs_list": [ 00:20:47.705 { 00:20:47.705 "name": "spare", 00:20:47.705 "uuid": "22216293-3651-54e0-84cf-08c76fca4710", 00:20:47.705 "is_configured": true, 00:20:47.705 "data_offset": 256, 00:20:47.705 "data_size": 7936 
00:20:47.705 }, 00:20:47.705 { 00:20:47.705 "name": "BaseBdev2", 00:20:47.705 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:47.705 "is_configured": true, 00:20:47.705 "data_offset": 256, 00:20:47.705 "data_size": 7936 00:20:47.705 } 00:20:47.705 ] 00:20:47.705 }' 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.705 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.965 "name": "raid_bdev1", 00:20:47.965 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:47.965 "strip_size_kb": 0, 00:20:47.965 "state": "online", 00:20:47.965 "raid_level": "raid1", 00:20:47.965 "superblock": true, 00:20:47.965 "num_base_bdevs": 2, 00:20:47.965 "num_base_bdevs_discovered": 2, 00:20:47.965 "num_base_bdevs_operational": 2, 00:20:47.965 "base_bdevs_list": [ 00:20:47.965 { 00:20:47.965 "name": "spare", 00:20:47.965 "uuid": "22216293-3651-54e0-84cf-08c76fca4710", 00:20:47.965 "is_configured": true, 00:20:47.965 "data_offset": 256, 00:20:47.965 "data_size": 7936 00:20:47.965 }, 00:20:47.965 { 00:20:47.965 "name": "BaseBdev2", 00:20:47.965 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:47.965 "is_configured": true, 00:20:47.965 "data_offset": 256, 00:20:47.965 "data_size": 7936 00:20:47.965 } 00:20:47.965 ] 00:20:47.965 }' 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.965 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.224 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:20:48.224 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.224 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.224 [2024-11-27 04:37:44.738016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:48.224 [2024-11-27 04:37:44.738108] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.224 [2024-11-27 04:37:44.738212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.224 [2024-11-27 04:37:44.738278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.224 [2024-11-27 04:37:44.738288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:48.224 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:48.225 04:37:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.225 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.485 [2024-11-27 04:37:44.809855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:48.485 [2024-11-27 04:37:44.809975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.485 [2024-11-27 04:37:44.810002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:48.485 [2024-11-27 04:37:44.810011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.485 [2024-11-27 04:37:44.812131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.485 [2024-11-27 04:37:44.812169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:48.485 [2024-11-27 04:37:44.812230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:48.485 [2024-11-27 04:37:44.812300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:48.485 [2024-11-27 04:37:44.812425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:48.485 spare 00:20:48.485 04:37:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.485 [2024-11-27 04:37:44.912345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:48.485 [2024-11-27 04:37:44.912378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:48.485 [2024-11-27 04:37:44.912494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:48.485 [2024-11-27 04:37:44.912596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:48.485 [2024-11-27 04:37:44.912606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:48.485 [2024-11-27 04:37:44.912701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.485 "name": "raid_bdev1", 00:20:48.485 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:48.485 "strip_size_kb": 0, 00:20:48.485 "state": "online", 00:20:48.485 "raid_level": "raid1", 00:20:48.485 "superblock": true, 00:20:48.485 "num_base_bdevs": 2, 00:20:48.485 "num_base_bdevs_discovered": 2, 00:20:48.485 "num_base_bdevs_operational": 2, 00:20:48.485 "base_bdevs_list": [ 00:20:48.485 { 00:20:48.485 "name": "spare", 00:20:48.485 "uuid": "22216293-3651-54e0-84cf-08c76fca4710", 00:20:48.485 "is_configured": true, 00:20:48.485 "data_offset": 256, 00:20:48.485 "data_size": 7936 00:20:48.485 }, 00:20:48.485 { 00:20:48.485 "name": 
"BaseBdev2", 00:20:48.485 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:48.485 "is_configured": true, 00:20:48.485 "data_offset": 256, 00:20:48.485 "data_size": 7936 00:20:48.485 } 00:20:48.485 ] 00:20:48.485 }' 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.485 04:37:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.053 "name": "raid_bdev1", 00:20:49.053 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:49.053 "strip_size_kb": 0, 00:20:49.053 "state": "online", 00:20:49.053 
"raid_level": "raid1", 00:20:49.053 "superblock": true, 00:20:49.053 "num_base_bdevs": 2, 00:20:49.053 "num_base_bdevs_discovered": 2, 00:20:49.053 "num_base_bdevs_operational": 2, 00:20:49.053 "base_bdevs_list": [ 00:20:49.053 { 00:20:49.053 "name": "spare", 00:20:49.053 "uuid": "22216293-3651-54e0-84cf-08c76fca4710", 00:20:49.053 "is_configured": true, 00:20:49.053 "data_offset": 256, 00:20:49.053 "data_size": 7936 00:20:49.053 }, 00:20:49.053 { 00:20:49.053 "name": "BaseBdev2", 00:20:49.053 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:49.053 "is_configured": true, 00:20:49.053 "data_offset": 256, 00:20:49.053 "data_size": 7936 00:20:49.053 } 00:20:49.053 ] 00:20:49.053 }' 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.053 04:37:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.053 [2024-11-27 04:37:45.592613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.053 04:37:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.053 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.315 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.315 "name": "raid_bdev1", 00:20:49.315 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:49.315 "strip_size_kb": 0, 00:20:49.315 "state": "online", 00:20:49.315 "raid_level": "raid1", 00:20:49.315 "superblock": true, 00:20:49.315 "num_base_bdevs": 2, 00:20:49.315 "num_base_bdevs_discovered": 1, 00:20:49.315 "num_base_bdevs_operational": 1, 00:20:49.315 "base_bdevs_list": [ 00:20:49.315 { 00:20:49.315 "name": null, 00:20:49.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.315 "is_configured": false, 00:20:49.315 "data_offset": 0, 00:20:49.315 "data_size": 7936 00:20:49.315 }, 00:20:49.315 { 00:20:49.315 "name": "BaseBdev2", 00:20:49.315 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:49.315 "is_configured": true, 00:20:49.315 "data_offset": 256, 00:20:49.315 "data_size": 7936 00:20:49.315 } 00:20:49.315 ] 00:20:49.315 }' 00:20:49.315 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.315 04:37:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.580 04:37:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.580 04:37:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.580 04:37:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.580 [2024-11-27 04:37:46.055821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.580 [2024-11-27 04:37:46.056079] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:49.580 [2024-11-27 04:37:46.056158] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:49.580 [2024-11-27 04:37:46.056224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.580 [2024-11-27 04:37:46.072394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:49.580 04:37:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.580 04:37:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:49.580 [2024-11-27 04:37:46.074380] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.517 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.776 "name": "raid_bdev1", 00:20:50.776 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:50.776 "strip_size_kb": 0, 00:20:50.776 "state": "online", 00:20:50.776 "raid_level": "raid1", 00:20:50.776 "superblock": true, 00:20:50.776 "num_base_bdevs": 2, 00:20:50.776 "num_base_bdevs_discovered": 2, 00:20:50.776 "num_base_bdevs_operational": 2, 00:20:50.776 "process": { 00:20:50.776 "type": "rebuild", 00:20:50.776 "target": "spare", 00:20:50.776 "progress": { 00:20:50.776 "blocks": 2560, 00:20:50.776 "percent": 32 00:20:50.776 } 00:20:50.776 }, 00:20:50.776 "base_bdevs_list": [ 00:20:50.776 { 00:20:50.776 "name": "spare", 00:20:50.776 "uuid": "22216293-3651-54e0-84cf-08c76fca4710", 00:20:50.776 "is_configured": true, 00:20:50.776 "data_offset": 256, 00:20:50.776 "data_size": 7936 00:20:50.776 }, 00:20:50.776 { 00:20:50.776 "name": "BaseBdev2", 00:20:50.776 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:50.776 "is_configured": true, 00:20:50.776 "data_offset": 256, 00:20:50.776 "data_size": 7936 00:20:50.776 } 00:20:50.776 ] 00:20:50.776 }' 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:50.776 [2024-11-27 04:37:47.233850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.776 [2024-11-27 04:37:47.280177] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:50.776 [2024-11-27 04:37:47.280365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.776 [2024-11-27 04:37:47.280416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.776 [2024-11-27 04:37:47.280454] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:50.776 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.035 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.035 "name": "raid_bdev1", 00:20:51.035 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:51.035 "strip_size_kb": 0, 00:20:51.035 "state": "online", 00:20:51.035 "raid_level": "raid1", 00:20:51.035 "superblock": true, 00:20:51.035 "num_base_bdevs": 2, 00:20:51.035 "num_base_bdevs_discovered": 1, 00:20:51.035 "num_base_bdevs_operational": 1, 00:20:51.035 "base_bdevs_list": [ 00:20:51.035 { 00:20:51.035 "name": null, 00:20:51.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.035 "is_configured": false, 00:20:51.035 "data_offset": 0, 00:20:51.035 "data_size": 7936 00:20:51.035 }, 00:20:51.035 { 00:20:51.035 "name": "BaseBdev2", 00:20:51.035 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:51.035 "is_configured": true, 
00:20:51.035 "data_offset": 256, 00:20:51.035 "data_size": 7936 00:20:51.035 } 00:20:51.035 ] 00:20:51.035 }' 00:20:51.035 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.035 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.295 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:51.295 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.295 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.295 [2024-11-27 04:37:47.775258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:51.295 [2024-11-27 04:37:47.775422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.295 [2024-11-27 04:37:47.775458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:51.295 [2024-11-27 04:37:47.775471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.295 [2024-11-27 04:37:47.775692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.295 [2024-11-27 04:37:47.775709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:51.295 [2024-11-27 04:37:47.775775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:51.295 [2024-11-27 04:37:47.775790] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:51.295 [2024-11-27 04:37:47.775801] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:51.295 [2024-11-27 04:37:47.775825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.295 [2024-11-27 04:37:47.792918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:51.295 spare 00:20:51.295 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.295 04:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:51.295 [2024-11-27 04:37:47.794991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.232 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.232 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.232 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.232 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.232 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.232 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.232 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.232 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.232 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.491 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.491 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:52.491 "name": "raid_bdev1", 00:20:52.491 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:52.491 "strip_size_kb": 0, 00:20:52.491 "state": "online", 00:20:52.491 "raid_level": "raid1", 00:20:52.491 "superblock": true, 00:20:52.491 "num_base_bdevs": 2, 00:20:52.491 "num_base_bdevs_discovered": 2, 00:20:52.491 "num_base_bdevs_operational": 2, 00:20:52.491 "process": { 00:20:52.491 "type": "rebuild", 00:20:52.491 "target": "spare", 00:20:52.491 "progress": { 00:20:52.491 "blocks": 2560, 00:20:52.491 "percent": 32 00:20:52.491 } 00:20:52.491 }, 00:20:52.491 "base_bdevs_list": [ 00:20:52.491 { 00:20:52.491 "name": "spare", 00:20:52.491 "uuid": "22216293-3651-54e0-84cf-08c76fca4710", 00:20:52.491 "is_configured": true, 00:20:52.491 "data_offset": 256, 00:20:52.491 "data_size": 7936 00:20:52.491 }, 00:20:52.491 { 00:20:52.491 "name": "BaseBdev2", 00:20:52.491 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:52.491 "is_configured": true, 00:20:52.491 "data_offset": 256, 00:20:52.491 "data_size": 7936 00:20:52.491 } 00:20:52.491 ] 00:20:52.491 }' 00:20:52.491 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.491 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.491 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.491 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.491 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:52.491 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.491 04:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.491 [2024-11-27 
04:37:48.954539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:52.491 [2024-11-27 04:37:49.000810] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:52.491 [2024-11-27 04:37:49.000959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.491 [2024-11-27 04:37:49.000998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:52.491 [2024-11-27 04:37:49.001019] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.491 04:37:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.491 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.492 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.492 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.492 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.756 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.756 "name": "raid_bdev1", 00:20:52.756 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:52.756 "strip_size_kb": 0, 00:20:52.756 "state": "online", 00:20:52.756 "raid_level": "raid1", 00:20:52.756 "superblock": true, 00:20:52.756 "num_base_bdevs": 2, 00:20:52.756 "num_base_bdevs_discovered": 1, 00:20:52.756 "num_base_bdevs_operational": 1, 00:20:52.756 "base_bdevs_list": [ 00:20:52.756 { 00:20:52.756 "name": null, 00:20:52.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.756 "is_configured": false, 00:20:52.756 "data_offset": 0, 00:20:52.756 "data_size": 7936 00:20:52.756 }, 00:20:52.756 { 00:20:52.756 "name": "BaseBdev2", 00:20:52.756 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:52.756 "is_configured": true, 00:20:52.756 "data_offset": 256, 00:20:52.756 "data_size": 7936 00:20:52.756 } 00:20:52.756 ] 00:20:52.756 }' 00:20:52.756 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.756 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:53.016 04:37:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.016 "name": "raid_bdev1", 00:20:53.016 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:53.016 "strip_size_kb": 0, 00:20:53.016 "state": "online", 00:20:53.016 "raid_level": "raid1", 00:20:53.016 "superblock": true, 00:20:53.016 "num_base_bdevs": 2, 00:20:53.016 "num_base_bdevs_discovered": 1, 00:20:53.016 "num_base_bdevs_operational": 1, 00:20:53.016 "base_bdevs_list": [ 00:20:53.016 { 00:20:53.016 "name": null, 00:20:53.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.016 "is_configured": false, 00:20:53.016 "data_offset": 0, 00:20:53.016 "data_size": 7936 00:20:53.016 }, 00:20:53.016 { 00:20:53.016 "name": "BaseBdev2", 00:20:53.016 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:53.016 "is_configured": true, 00:20:53.016 "data_offset": 256, 
00:20:53.016 "data_size": 7936 00:20:53.016 } 00:20:53.016 ] 00:20:53.016 }' 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.016 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:53.275 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.275 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:53.275 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.275 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:53.275 [2024-11-27 04:37:49.609393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:53.275 [2024-11-27 04:37:49.609454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.275 [2024-11-27 04:37:49.609477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:53.275 [2024-11-27 04:37:49.609487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.275 [2024-11-27 04:37:49.609670] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.275 [2024-11-27 04:37:49.609684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:53.275 [2024-11-27 04:37:49.609736] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:53.275 [2024-11-27 04:37:49.609749] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:53.275 [2024-11-27 04:37:49.609758] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:53.275 [2024-11-27 04:37:49.609769] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:53.275 BaseBdev1 00:20:53.275 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.275 04:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.213 04:37:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.213 "name": "raid_bdev1", 00:20:54.213 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:54.213 "strip_size_kb": 0, 00:20:54.213 "state": "online", 00:20:54.213 "raid_level": "raid1", 00:20:54.213 "superblock": true, 00:20:54.213 "num_base_bdevs": 2, 00:20:54.213 "num_base_bdevs_discovered": 1, 00:20:54.213 "num_base_bdevs_operational": 1, 00:20:54.213 "base_bdevs_list": [ 00:20:54.213 { 00:20:54.213 "name": null, 00:20:54.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.213 "is_configured": false, 00:20:54.213 "data_offset": 0, 00:20:54.213 "data_size": 7936 00:20:54.213 }, 00:20:54.213 { 00:20:54.213 "name": "BaseBdev2", 00:20:54.213 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:54.213 "is_configured": true, 00:20:54.213 "data_offset": 256, 00:20:54.213 "data_size": 7936 00:20:54.213 } 00:20:54.213 ] 00:20:54.213 }' 00:20:54.213 04:37:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.213 04:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.782 "name": "raid_bdev1", 00:20:54.782 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:54.782 "strip_size_kb": 0, 00:20:54.782 "state": "online", 00:20:54.782 "raid_level": "raid1", 00:20:54.782 "superblock": true, 00:20:54.782 "num_base_bdevs": 2, 00:20:54.782 "num_base_bdevs_discovered": 1, 00:20:54.782 "num_base_bdevs_operational": 1, 00:20:54.782 "base_bdevs_list": [ 00:20:54.782 { 00:20:54.782 "name": 
null, 00:20:54.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.782 "is_configured": false, 00:20:54.782 "data_offset": 0, 00:20:54.782 "data_size": 7936 00:20:54.782 }, 00:20:54.782 { 00:20:54.782 "name": "BaseBdev2", 00:20:54.782 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:54.782 "is_configured": true, 00:20:54.782 "data_offset": 256, 00:20:54.782 "data_size": 7936 00:20:54.782 } 00:20:54.782 ] 00:20:54.782 }' 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:54.782 [2024-11-27 04:37:51.282888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.782 [2024-11-27 04:37:51.283142] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:54.782 [2024-11-27 04:37:51.283169] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:54.782 request: 00:20:54.782 { 00:20:54.782 "base_bdev": "BaseBdev1", 00:20:54.782 "raid_bdev": "raid_bdev1", 00:20:54.782 "method": "bdev_raid_add_base_bdev", 00:20:54.782 "req_id": 1 00:20:54.782 } 00:20:54.782 Got JSON-RPC error response 00:20:54.782 response: 00:20:54.782 { 00:20:54.782 "code": -22, 00:20:54.782 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:54.782 } 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.782 04:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.718 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.977 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.977 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.977 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.977 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.977 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.977 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.977 "name": "raid_bdev1", 00:20:55.977 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:55.977 "strip_size_kb": 0, 
00:20:55.977 "state": "online", 00:20:55.977 "raid_level": "raid1", 00:20:55.977 "superblock": true, 00:20:55.977 "num_base_bdevs": 2, 00:20:55.977 "num_base_bdevs_discovered": 1, 00:20:55.977 "num_base_bdevs_operational": 1, 00:20:55.977 "base_bdevs_list": [ 00:20:55.977 { 00:20:55.977 "name": null, 00:20:55.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.977 "is_configured": false, 00:20:55.977 "data_offset": 0, 00:20:55.977 "data_size": 7936 00:20:55.977 }, 00:20:55.977 { 00:20:55.977 "name": "BaseBdev2", 00:20:55.977 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:55.977 "is_configured": true, 00:20:55.977 "data_offset": 256, 00:20:55.977 "data_size": 7936 00:20:55.977 } 00:20:55.977 ] 00:20:55.977 }' 00:20:55.977 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.977 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.237 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:56.237 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.237 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:56.237 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:56.237 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.237 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.237 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.237 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.237 
04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.495 "name": "raid_bdev1", 00:20:56.495 "uuid": "9afb2507-09ea-4c58-9efc-af1985720d6c", 00:20:56.495 "strip_size_kb": 0, 00:20:56.495 "state": "online", 00:20:56.495 "raid_level": "raid1", 00:20:56.495 "superblock": true, 00:20:56.495 "num_base_bdevs": 2, 00:20:56.495 "num_base_bdevs_discovered": 1, 00:20:56.495 "num_base_bdevs_operational": 1, 00:20:56.495 "base_bdevs_list": [ 00:20:56.495 { 00:20:56.495 "name": null, 00:20:56.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.495 "is_configured": false, 00:20:56.495 "data_offset": 0, 00:20:56.495 "data_size": 7936 00:20:56.495 }, 00:20:56.495 { 00:20:56.495 "name": "BaseBdev2", 00:20:56.495 "uuid": "61aa101f-800a-5983-a067-015b4c89daa3", 00:20:56.495 "is_configured": true, 00:20:56.495 "data_offset": 256, 00:20:56.495 "data_size": 7936 00:20:56.495 } 00:20:56.495 ] 00:20:56.495 }' 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89472 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89472 ']' 00:20:56.495 04:37:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89472 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89472 00:20:56.495 04:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.495 04:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.495 04:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89472' 00:20:56.495 killing process with pid 89472 00:20:56.495 04:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89472 00:20:56.495 Received shutdown signal, test time was about 60.000000 seconds 00:20:56.495 00:20:56.495 Latency(us) 00:20:56.495 [2024-11-27T04:37:53.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.495 [2024-11-27T04:37:53.082Z] =================================================================================================================== 00:20:56.495 [2024-11-27T04:37:53.082Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.495 [2024-11-27 04:37:53.002471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.495 04:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89472 00:20:56.495 [2024-11-27 04:37:53.002643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.495 [2024-11-27 04:37:53.002696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:56.495 [2024-11-27 04:37:53.002709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:57.062 [2024-11-27 04:37:53.373383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.459 04:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:58.459 00:20:58.459 real 0m18.134s 00:20:58.459 user 0m23.792s 00:20:58.459 sys 0m1.708s 00:20:58.459 04:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.459 04:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.459 ************************************ 00:20:58.459 END TEST raid_rebuild_test_sb_md_interleaved 00:20:58.459 ************************************ 00:20:58.459 04:37:54 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:58.459 04:37:54 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:58.459 04:37:54 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89472 ']' 00:20:58.459 04:37:54 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89472 00:20:58.459 04:37:54 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:58.459 00:20:58.459 real 12m32.003s 00:20:58.459 user 16m53.481s 00:20:58.459 sys 1m57.224s 00:20:58.459 ************************************ 00:20:58.459 END TEST bdev_raid 00:20:58.459 ************************************ 00:20:58.459 04:37:54 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.459 04:37:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.459 04:37:54 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:58.459 04:37:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:58.459 04:37:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.459 04:37:54 -- common/autotest_common.sh@10 -- # set +x 00:20:58.459 
************************************ 00:20:58.459 START TEST spdkcli_raid 00:20:58.459 ************************************ 00:20:58.459 04:37:54 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:58.459 * Looking for test storage... 00:20:58.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:58.459 04:37:54 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:58.459 04:37:55 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:58.459 04:37:55 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:58.719 04:37:55 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.719 04:37:55 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:58.719 04:37:55 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.719 04:37:55 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:58.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.719 --rc genhtml_branch_coverage=1 00:20:58.719 --rc genhtml_function_coverage=1 00:20:58.719 --rc genhtml_legend=1 00:20:58.719 --rc geninfo_all_blocks=1 00:20:58.719 --rc geninfo_unexecuted_blocks=1 00:20:58.719 00:20:58.719 ' 00:20:58.719 04:37:55 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:58.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.719 --rc genhtml_branch_coverage=1 00:20:58.719 --rc genhtml_function_coverage=1 00:20:58.719 --rc genhtml_legend=1 00:20:58.719 --rc geninfo_all_blocks=1 00:20:58.719 --rc geninfo_unexecuted_blocks=1 00:20:58.719 00:20:58.719 ' 00:20:58.719 
04:37:55 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:58.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.719 --rc genhtml_branch_coverage=1 00:20:58.719 --rc genhtml_function_coverage=1 00:20:58.719 --rc genhtml_legend=1 00:20:58.719 --rc geninfo_all_blocks=1 00:20:58.719 --rc geninfo_unexecuted_blocks=1 00:20:58.719 00:20:58.719 ' 00:20:58.719 04:37:55 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:58.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.719 --rc genhtml_branch_coverage=1 00:20:58.719 --rc genhtml_function_coverage=1 00:20:58.719 --rc genhtml_legend=1 00:20:58.719 --rc geninfo_all_blocks=1 00:20:58.719 --rc geninfo_unexecuted_blocks=1 00:20:58.719 00:20:58.719 ' 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:58.719 04:37:55 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:58.719 04:37:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.719 04:37:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90150 00:20:58.719 04:37:55 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:58.720 04:37:55 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90150 00:20:58.720 04:37:55 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90150 ']' 00:20:58.720 04:37:55 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.720 04:37:55 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.720 04:37:55 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.720 04:37:55 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.720 04:37:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.720 [2024-11-27 04:37:55.258444] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:20:58.720 [2024-11-27 04:37:55.258654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90150 ] 00:20:58.979 [2024-11-27 04:37:55.440808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:59.240 [2024-11-27 04:37:55.578508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.240 [2024-11-27 04:37:55.578553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.176 04:37:56 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.176 04:37:56 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:21:00.176 04:37:56 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:21:00.176 04:37:56 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.176 04:37:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.176 04:37:56 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:21:00.176 04:37:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.176 04:37:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.176 04:37:56 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:00.176 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:00.176 ' 00:21:02.081 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:21:02.081 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:21:02.081 04:37:58 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:21:02.081 04:37:58 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:02.081 04:37:58 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.081 04:37:58 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:21:02.081 04:37:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.081 04:37:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:02.081 04:37:58 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:21:02.081 ' 00:21:03.151 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:21:03.151 04:37:59 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:21:03.151 04:37:59 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.151 04:37:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:03.151 04:37:59 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:21:03.151 04:37:59 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.151 04:37:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:03.151 04:37:59 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:21:03.151 04:37:59 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:21:03.721 04:38:00 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:21:03.721 04:38:00 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:21:03.721 04:38:00 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:21:03.721 04:38:00 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.721 04:38:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:03.721 04:38:00 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:21:03.721 04:38:00 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.721 04:38:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:03.721 04:38:00 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:21:03.721 ' 00:21:04.660 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:21:04.918 04:38:01 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:21:04.918 04:38:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.918 04:38:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:04.918 04:38:01 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:21:04.918 04:38:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.918 04:38:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:04.918 04:38:01 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:21:04.919 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:21:04.919 ' 00:21:06.297 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:21:06.297 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:21:06.556 04:38:02 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:21:06.556 04:38:02 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.556 04:38:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:06.556 04:38:03 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90150 00:21:06.556 04:38:03 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90150 ']' 00:21:06.556 04:38:03 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90150 00:21:06.557 04:38:03 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:21:06.557 04:38:03 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.557 04:38:03 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90150 00:21:06.557 killing process with pid 90150 00:21:06.557 04:38:03 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.557 04:38:03 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.557 04:38:03 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90150' 00:21:06.557 04:38:03 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90150 00:21:06.557 04:38:03 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90150 00:21:09.093 04:38:05 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:21:09.093 Process with pid 90150 is not found 00:21:09.093 04:38:05 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90150 ']' 00:21:09.093 04:38:05 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90150 00:21:09.093 04:38:05 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90150 ']' 00:21:09.093 04:38:05 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90150 00:21:09.093 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90150) - No such process 00:21:09.093 04:38:05 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90150 is not found' 00:21:09.093 04:38:05 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:21:09.093 04:38:05 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:09.093 04:38:05 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:09.093 04:38:05 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:09.093 00:21:09.093 real 0m10.768s 00:21:09.093 user 0m22.354s 00:21:09.093 sys 
0m1.142s 00:21:09.093 04:38:05 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.093 04:38:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:09.093 ************************************ 00:21:09.093 END TEST spdkcli_raid 00:21:09.093 ************************************ 00:21:09.352 04:38:05 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:09.352 04:38:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:09.352 04:38:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.352 04:38:05 -- common/autotest_common.sh@10 -- # set +x 00:21:09.352 ************************************ 00:21:09.352 START TEST blockdev_raid5f 00:21:09.352 ************************************ 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:09.352 * Looking for test storage... 00:21:09.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.352 04:38:05 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:09.352 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.352 --rc genhtml_branch_coverage=1 00:21:09.352 --rc genhtml_function_coverage=1 00:21:09.352 --rc genhtml_legend=1 00:21:09.352 --rc geninfo_all_blocks=1 00:21:09.352 --rc geninfo_unexecuted_blocks=1 00:21:09.352 00:21:09.352 ' 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:09.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.352 --rc genhtml_branch_coverage=1 00:21:09.352 --rc genhtml_function_coverage=1 00:21:09.352 --rc genhtml_legend=1 00:21:09.352 --rc geninfo_all_blocks=1 00:21:09.352 --rc geninfo_unexecuted_blocks=1 00:21:09.352 00:21:09.352 ' 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:09.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.352 --rc genhtml_branch_coverage=1 00:21:09.352 --rc genhtml_function_coverage=1 00:21:09.352 --rc genhtml_legend=1 00:21:09.352 --rc geninfo_all_blocks=1 00:21:09.352 --rc geninfo_unexecuted_blocks=1 00:21:09.352 00:21:09.352 ' 00:21:09.352 04:38:05 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:09.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.352 --rc genhtml_branch_coverage=1 00:21:09.352 --rc genhtml_function_coverage=1 00:21:09.352 --rc genhtml_legend=1 00:21:09.352 --rc geninfo_all_blocks=1 00:21:09.353 --rc geninfo_unexecuted_blocks=1 00:21:09.353 00:21:09.353 ' 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90436 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:09.353 04:38:05 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90436 00:21:09.353 04:38:05 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90436 ']' 00:21:09.353 04:38:05 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.353 04:38:05 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.353 04:38:05 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.353 04:38:05 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.353 04:38:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:09.610 [2024-11-27 04:38:05.974674] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:09.610 [2024-11-27 04:38:05.975590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90436 ] 00:21:09.610 [2024-11-27 04:38:06.142489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.868 [2024-11-27 04:38:06.289027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.806 04:38:07 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.806 04:38:07 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:21:10.806 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:21:10.806 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:21:10.806 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:21:10.806 04:38:07 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.806 04:38:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:10.806 Malloc0 00:21:10.806 Malloc1 00:21:10.806 Malloc2 00:21:10.806 04:38:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.806 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:21:10.806 04:38:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.806 04:38:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == 
false)' 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "1f5abf1c-a308-47bc-bc48-74409c87a22b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1f5abf1c-a308-47bc-bc48-74409c87a22b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "1f5abf1c-a308-47bc-bc48-74409c87a22b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "adbfaef0-3981-482a-bdd7-e2dff0217809",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"2cd7bcf7-aca1-494d-abfb-e2322cd51d68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0c660788-dcc4-4a41-9e2d-d6c5ba214ef3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:21:11.065 04:38:07 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90436 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90436 ']' 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90436 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90436 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90436' 00:21:11.065 killing process with pid 90436 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90436 00:21:11.065 04:38:07 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90436 00:21:14.378 04:38:10 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:14.378 04:38:10 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:14.378 04:38:10 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:14.378 04:38:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.378 04:38:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:14.378 ************************************ 00:21:14.378 START TEST bdev_hello_world 00:21:14.378 ************************************ 00:21:14.378 04:38:10 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:14.378 [2024-11-27 04:38:10.564379] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:14.378 [2024-11-27 04:38:10.564612] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90503 ] 00:21:14.378 [2024-11-27 04:38:10.743249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.378 [2024-11-27 04:38:10.861542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.944 [2024-11-27 04:38:11.411436] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:14.944 [2024-11-27 04:38:11.411602] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:21:14.944 [2024-11-27 04:38:11.411628] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:14.944 [2024-11-27 04:38:11.412283] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:14.944 [2024-11-27 04:38:11.412451] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:14.944 [2024-11-27 04:38:11.412535] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:14.944 [2024-11-27 04:38:11.412603] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:21:14.944 00:21:14.944 [2024-11-27 04:38:11.412625] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:16.316 ************************************ 00:21:16.316 END TEST bdev_hello_world 00:21:16.316 ************************************ 00:21:16.316 00:21:16.316 real 0m2.379s 00:21:16.316 user 0m2.017s 00:21:16.316 sys 0m0.239s 00:21:16.316 04:38:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.316 04:38:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:16.574 04:38:12 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:21:16.574 04:38:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.574 04:38:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.574 04:38:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:16.574 ************************************ 00:21:16.574 START TEST bdev_bounds 00:21:16.574 ************************************ 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90545 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90545' 00:21:16.574 Process bdevio pid: 90545 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90545 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90545 ']' 00:21:16.574 04:38:12 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.574 04:38:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:16.574 [2024-11-27 04:38:13.024798] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:16.574 [2024-11-27 04:38:13.024954] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90545 ] 00:21:16.832 [2024-11-27 04:38:13.203658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:16.832 [2024-11-27 04:38:13.326787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.832 [2024-11-27 04:38:13.326939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.832 [2024-11-27 04:38:13.326975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.398 04:38:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.398 04:38:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:17.398 04:38:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:17.656 I/O targets: 00:21:17.656 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:21:17.656 00:21:17.656 
00:21:17.656 CUnit - A unit testing framework for C - Version 2.1-3 00:21:17.656 http://cunit.sourceforge.net/ 00:21:17.656 00:21:17.656 00:21:17.656 Suite: bdevio tests on: raid5f 00:21:17.656 Test: blockdev write read block ...passed 00:21:17.656 Test: blockdev write zeroes read block ...passed 00:21:17.656 Test: blockdev write zeroes read no split ...passed 00:21:17.656 Test: blockdev write zeroes read split ...passed 00:21:17.914 Test: blockdev write zeroes read split partial ...passed 00:21:17.914 Test: blockdev reset ...passed 00:21:17.914 Test: blockdev write read 8 blocks ...passed 00:21:17.914 Test: blockdev write read size > 128k ...passed 00:21:17.914 Test: blockdev write read invalid size ...passed 00:21:17.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:17.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:17.914 Test: blockdev write read max offset ...passed 00:21:17.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:17.914 Test: blockdev writev readv 8 blocks ...passed 00:21:17.914 Test: blockdev writev readv 30 x 1block ...passed 00:21:17.914 Test: blockdev writev readv block ...passed 00:21:17.914 Test: blockdev writev readv size > 128k ...passed 00:21:17.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:17.914 Test: blockdev comparev and writev ...passed 00:21:17.914 Test: blockdev nvme passthru rw ...passed 00:21:17.914 Test: blockdev nvme passthru vendor specific ...passed 00:21:17.914 Test: blockdev nvme admin passthru ...passed 00:21:17.914 Test: blockdev copy ...passed 00:21:17.914 00:21:17.914 Run Summary: Type Total Ran Passed Failed Inactive 00:21:17.914 suites 1 1 n/a 0 0 00:21:17.914 tests 23 23 23 0 0 00:21:17.914 asserts 130 130 130 0 n/a 00:21:17.914 00:21:17.914 Elapsed time = 0.624 seconds 00:21:17.914 0 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90545 00:21:17.914 
04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90545 ']' 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90545 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90545 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90545' 00:21:17.914 killing process with pid 90545 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90545 00:21:17.914 04:38:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90545 00:21:19.817 ************************************ 00:21:19.817 END TEST bdev_bounds 00:21:19.817 ************************************ 00:21:19.817 04:38:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:19.817 00:21:19.817 real 0m3.050s 00:21:19.817 user 0m7.623s 00:21:19.817 sys 0m0.404s 00:21:19.817 04:38:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.817 04:38:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:19.817 04:38:16 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:19.817 04:38:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:19.817 04:38:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.817 
04:38:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:19.817 ************************************ 00:21:19.817 START TEST bdev_nbd 00:21:19.817 ************************************ 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90616 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90616 /var/tmp/spdk-nbd.sock 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90616 ']' 00:21:19.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.817 04:38:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:19.817 [2024-11-27 04:38:16.144793] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:21:19.817 [2024-11-27 04:38:16.145014] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.817 [2024-11-27 04:38:16.325931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.076 [2024-11-27 04:38:16.445494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:20.643 04:38:17 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.903 1+0 records in 00:21:20.903 1+0 records out 00:21:20.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005953 s, 6.9 MB/s 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:20.903 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:21.163 { 00:21:21.163 "nbd_device": "/dev/nbd0", 00:21:21.163 "bdev_name": "raid5f" 00:21:21.163 } 00:21:21.163 ]' 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:21.163 { 00:21:21.163 "nbd_device": "/dev/nbd0", 00:21:21.163 "bdev_name": "raid5f" 00:21:21.163 } 00:21:21.163 ]' 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.163 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:21.422 04:38:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.682 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:21:21.942 /dev/nbd0 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:21.942 04:38:18 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.942 1+0 records in 00:21:21.942 1+0 records out 00:21:21.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552466 s, 7.4 MB/s 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:21.942 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:22.201 { 00:21:22.201 "nbd_device": "/dev/nbd0", 00:21:22.201 "bdev_name": "raid5f" 00:21:22.201 } 00:21:22.201 ]' 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:22.201 { 00:21:22.201 "nbd_device": "/dev/nbd0", 00:21:22.201 "bdev_name": "raid5f" 00:21:22.201 } 00:21:22.201 ]' 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:22.201 256+0 records in 00:21:22.201 256+0 records out 00:21:22.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130215 s, 80.5 MB/s 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:22.201 256+0 records in 00:21:22.201 256+0 records out 00:21:22.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.040422 s, 25.9 MB/s 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:22.201 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:22.460 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:22.460 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:22.460 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:22.460 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:22.460 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:22.460 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:22.460 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.460 04:38:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:22.718 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:22.976 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:23.237 malloc_lvol_verify 00:21:23.237 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:23.495 a8b19ded-6631-4ddf-b403-9c3300890a70 00:21:23.495 04:38:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:23.753 965742c5-b116-48a4-8bc1-fe9805f45815 00:21:23.753 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:24.011 /dev/nbd0 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:24.011 mke2fs 1.47.0 (5-Feb-2023) 00:21:24.011 Discarding device blocks: 0/4096 done 00:21:24.011 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:24.011 00:21:24.011 Allocating group tables: 0/1 done 00:21:24.011 Writing inode tables: 0/1 done 00:21:24.011 Creating journal (1024 blocks): done 00:21:24.011 Writing superblocks and filesystem accounting information: 0/1 done 00:21:24.011 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.011 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90616 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90616 ']' 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90616 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90616 00:21:24.270 killing process with pid 90616 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90616' 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90616 00:21:24.270 04:38:20 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90616 00:21:26.175 ************************************ 00:21:26.175 END TEST bdev_nbd 00:21:26.175 ************************************ 00:21:26.175 04:38:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:26.175 00:21:26.175 real 0m6.257s 00:21:26.175 user 0m8.620s 00:21:26.175 sys 0m1.386s 00:21:26.175 04:38:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.175 04:38:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:26.175 04:38:22 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:21:26.175 04:38:22 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:21:26.175 04:38:22 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:21:26.175 04:38:22 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:21:26.175 04:38:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:26.175 04:38:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.175 04:38:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:26.175 ************************************ 00:21:26.175 START TEST bdev_fio 00:21:26.175 ************************************ 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:26.175 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:26.175 ************************************ 00:21:26.175 START TEST bdev_fio_rw_verify 00:21:26.175 ************************************ 00:21:26.175 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:26.176 04:38:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:26.435 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:26.435 fio-3.35 00:21:26.435 Starting 1 thread 00:21:38.645 00:21:38.645 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90818: Wed Nov 27 04:38:33 2024 00:21:38.645 read: IOPS=9319, BW=36.4MiB/s (38.2MB/s)(364MiB/10001msec) 00:21:38.645 slat (nsec): min=19721, max=92784, avg=26082.72, stdev=3846.33 00:21:38.645 clat (usec): min=11, max=489, avg=169.82, stdev=64.00 00:21:38.645 lat (usec): min=36, max=537, avg=195.90, stdev=64.95 00:21:38.645 clat percentiles (usec): 00:21:38.645 | 50.000th=[ 172], 99.000th=[ 306], 99.900th=[ 355], 99.990th=[ 420], 00:21:38.645 | 99.999th=[ 490] 00:21:38.645 write: IOPS=9798, BW=38.3MiB/s (40.1MB/s)(377MiB/9861msec); 0 zone resets 00:21:38.645 slat (usec): min=8, max=180, avg=22.02, stdev= 5.42 00:21:38.645 clat (usec): min=79, max=1037, avg=389.28, stdev=64.06 00:21:38.645 lat (usec): min=101, max=1212, avg=411.29, stdev=66.47 00:21:38.645 clat percentiles (usec): 00:21:38.645 | 50.000th=[ 388], 99.000th=[ 578], 99.900th=[ 676], 99.990th=[ 930], 00:21:38.645 | 99.999th=[ 1037] 00:21:38.645 bw ( KiB/s): min=34000, max=42328, per=98.52%, avg=38614.58, stdev=2263.05, samples=19 00:21:38.645 iops : min= 8500, max=10582, avg=9653.63, stdev=565.78, samples=19 00:21:38.645 lat (usec) : 20=0.01%, 50=0.01%, 
100=9.27%, 250=34.32%, 500=54.29% 00:21:38.645 lat (usec) : 750=2.08%, 1000=0.02% 00:21:38.645 lat (msec) : 2=0.01% 00:21:38.645 cpu : usr=98.99%, sys=0.33%, ctx=29, majf=0, minf=7971 00:21:38.645 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.645 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.645 issued rwts: total=93205,96622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.645 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:38.645 00:21:38.645 Run status group 0 (all jobs): 00:21:38.645 READ: bw=36.4MiB/s (38.2MB/s), 36.4MiB/s-36.4MiB/s (38.2MB/s-38.2MB/s), io=364MiB (382MB), run=10001-10001msec 00:21:38.645 WRITE: bw=38.3MiB/s (40.1MB/s), 38.3MiB/s-38.3MiB/s (40.1MB/s-40.1MB/s), io=377MiB (396MB), run=9861-9861msec 00:21:38.904 ----------------------------------------------------- 00:21:38.904 Suppressions used: 00:21:38.904 count bytes template 00:21:38.904 1 7 /usr/src/fio/parse.c 00:21:38.904 512 49152 /usr/src/fio/iolog.c 00:21:38.904 1 8 libtcmalloc_minimal.so 00:21:38.904 1 904 libcrypto.so 00:21:38.904 ----------------------------------------------------- 00:21:38.904 00:21:38.904 00:21:38.904 real 0m12.930s 00:21:38.904 user 0m13.147s 00:21:38.904 sys 0m0.678s 00:21:38.904 ************************************ 00:21:38.904 END TEST bdev_fio_rw_verify 00:21:38.904 ************************************ 00:21:38.904 04:38:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.904 04:38:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "1f5abf1c-a308-47bc-bc48-74409c87a22b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"1f5abf1c-a308-47bc-bc48-74409c87a22b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "1f5abf1c-a308-47bc-bc48-74409c87a22b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "adbfaef0-3981-482a-bdd7-e2dff0217809",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2cd7bcf7-aca1-494d-abfb-e2322cd51d68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0c660788-dcc4-4a41-9e2d-d6c5ba214ef3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:39.163 /home/vagrant/spdk_repo/spdk 00:21:39.163 ************************************ 00:21:39.163 END TEST bdev_fio 00:21:39.163 ************************************ 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:21:39.163 00:21:39.163 real 0m13.216s 00:21:39.163 user 0m13.286s 00:21:39.163 sys 0m0.801s 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:39.163 04:38:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:39.163 04:38:35 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:39.163 04:38:35 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:39.163 04:38:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:39.163 04:38:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.163 04:38:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:39.163 ************************************ 00:21:39.163 START TEST bdev_verify 00:21:39.163 ************************************ 00:21:39.163 04:38:35 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:39.163 [2024-11-27 04:38:35.734885] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:21:39.163 [2024-11-27 04:38:35.735072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90986 ] 00:21:39.422 [2024-11-27 04:38:35.913272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:39.681 [2024-11-27 04:38:36.031596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.681 [2024-11-27 04:38:36.031632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.247 Running I/O for 5 seconds... 00:21:42.123 8576.00 IOPS, 33.50 MiB/s [2024-11-27T04:38:39.646Z] 8582.00 IOPS, 33.52 MiB/s [2024-11-27T04:38:41.026Z] 8659.33 IOPS, 33.83 MiB/s [2024-11-27T04:38:41.961Z] 8615.25 IOPS, 33.65 MiB/s [2024-11-27T04:38:41.961Z] 8585.60 IOPS, 33.54 MiB/s 00:21:45.374 Latency(us) 00:21:45.374 [2024-11-27T04:38:41.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.374 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:45.374 Verification LBA range: start 0x0 length 0x2000 00:21:45.374 raid5f : 5.02 3939.35 15.39 0.00 0.00 48946.73 309.44 38234.10 00:21:45.374 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.374 Verification LBA range: start 0x2000 length 0x2000 00:21:45.374 raid5f : 5.01 4644.39 18.14 0.00 0.00 41543.63 329.11 31136.75 00:21:45.374 [2024-11-27T04:38:41.961Z] =================================================================================================================== 00:21:45.374 [2024-11-27T04:38:41.961Z] Total : 8583.74 33.53 0.00 0.00 44944.18 309.44 38234.10 00:21:46.748 00:21:46.748 real 0m7.580s 00:21:46.748 user 0m13.993s 00:21:46.748 sys 0m0.280s 00:21:46.748 04:38:43 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.748 04:38:43 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:46.748 ************************************ 00:21:46.748 END TEST bdev_verify 00:21:46.748 ************************************ 00:21:46.748 04:38:43 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:46.748 04:38:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:46.748 04:38:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.748 04:38:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:46.748 ************************************ 00:21:46.748 START TEST bdev_verify_big_io 00:21:46.748 ************************************ 00:21:46.748 04:38:43 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:47.048 [2024-11-27 04:38:43.385151] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:47.048 [2024-11-27 04:38:43.385371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91079 ] 00:21:47.048 [2024-11-27 04:38:43.561073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:47.310 [2024-11-27 04:38:43.690901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.310 [2024-11-27 04:38:43.690941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.877 Running I/O for 5 seconds... 
00:21:50.188 506.00 IOPS, 31.62 MiB/s [2024-11-27T04:38:47.710Z] 634.00 IOPS, 39.62 MiB/s [2024-11-27T04:38:48.659Z] 676.67 IOPS, 42.29 MiB/s [2024-11-27T04:38:49.594Z] 698.00 IOPS, 43.62 MiB/s [2024-11-27T04:38:49.852Z] 723.20 IOPS, 45.20 MiB/s 00:21:53.265 Latency(us) 00:21:53.265 [2024-11-27T04:38:49.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.265 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:53.265 Verification LBA range: start 0x0 length 0x200 00:21:53.265 raid5f : 5.27 372.92 23.31 0.00 0.00 8373912.08 192.28 353493.74 00:21:53.265 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:53.265 Verification LBA range: start 0x200 length 0x200 00:21:53.265 raid5f : 5.34 380.08 23.76 0.00 0.00 8269782.32 200.33 349830.60 00:21:53.265 [2024-11-27T04:38:49.852Z] =================================================================================================================== 00:21:53.265 [2024-11-27T04:38:49.852Z] Total : 753.01 47.06 0.00 0.00 8321026.56 192.28 353493.74 00:21:55.168 00:21:55.168 real 0m8.029s 00:21:55.168 user 0m14.886s 00:21:55.168 sys 0m0.284s 00:21:55.168 ************************************ 00:21:55.168 END TEST bdev_verify_big_io 00:21:55.168 ************************************ 00:21:55.168 04:38:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.168 04:38:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:55.168 04:38:51 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:55.168 04:38:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:55.168 04:38:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.168 04:38:51 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:55.168 ************************************ 00:21:55.168 START TEST bdev_write_zeroes 00:21:55.168 ************************************ 00:21:55.168 04:38:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:55.168 [2024-11-27 04:38:51.459776] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:55.168 [2024-11-27 04:38:51.459981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91183 ] 00:21:55.168 [2024-11-27 04:38:51.633766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.427 [2024-11-27 04:38:51.763788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.994 Running I/O for 1 seconds... 
00:21:56.931 20967.00 IOPS, 81.90 MiB/s 00:21:56.931 Latency(us) 00:21:56.931 [2024-11-27T04:38:53.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.931 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:56.931 raid5f : 1.01 20958.23 81.87 0.00 0.00 6085.69 1745.72 7955.90 00:21:56.931 [2024-11-27T04:38:53.518Z] =================================================================================================================== 00:21:56.931 [2024-11-27T04:38:53.518Z] Total : 20958.23 81.87 0.00 0.00 6085.69 1745.72 7955.90 00:21:58.835 00:21:58.835 real 0m3.598s 00:21:58.835 user 0m3.222s 00:21:58.835 sys 0m0.246s 00:21:58.835 04:38:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.835 04:38:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:58.835 ************************************ 00:21:58.835 END TEST bdev_write_zeroes 00:21:58.835 ************************************ 00:21:58.835 04:38:55 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:58.835 04:38:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:58.835 04:38:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.835 04:38:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:58.835 ************************************ 00:21:58.835 START TEST bdev_json_nonenclosed 00:21:58.835 ************************************ 00:21:58.835 04:38:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:58.835 [2024-11-27 
04:38:55.139776] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:58.835 [2024-11-27 04:38:55.139985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91242 ] 00:21:58.835 [2024-11-27 04:38:55.320418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.094 [2024-11-27 04:38:55.455148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.094 [2024-11-27 04:38:55.455249] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:59.094 [2024-11-27 04:38:55.455280] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:59.094 [2024-11-27 04:38:55.455291] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:59.352 00:21:59.352 real 0m0.695s 00:21:59.352 user 0m0.468s 00:21:59.352 sys 0m0.121s 00:21:59.352 ************************************ 00:21:59.352 END TEST bdev_json_nonenclosed 00:21:59.352 ************************************ 00:21:59.352 04:38:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.352 04:38:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:59.352 04:38:55 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:59.352 04:38:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:59.352 04:38:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.352 04:38:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:59.352 
************************************ 00:21:59.352 START TEST bdev_json_nonarray 00:21:59.352 ************************************ 00:21:59.352 04:38:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:59.352 [2024-11-27 04:38:55.900011] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:59.352 [2024-11-27 04:38:55.900317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91267 ] 00:21:59.610 [2024-11-27 04:38:56.082916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.868 [2024-11-27 04:38:56.214494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.868 [2024-11-27 04:38:56.214688] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:59.868 [2024-11-27 04:38:56.214760] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:59.868 [2024-11-27 04:38:56.214842] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:00.127 00:22:00.127 real 0m0.704s 00:22:00.127 user 0m0.475s 00:22:00.127 sys 0m0.122s 00:22:00.127 04:38:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.127 04:38:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:00.127 ************************************ 00:22:00.127 END TEST bdev_json_nonarray 00:22:00.127 ************************************ 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:00.127 04:38:56 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:00.127 00:22:00.127 real 0m50.839s 00:22:00.127 user 1m9.585s 00:22:00.127 sys 0m4.897s 00:22:00.127 ************************************ 00:22:00.127 END TEST blockdev_raid5f 00:22:00.127 ************************************ 00:22:00.127 04:38:56 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.127 04:38:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:00.127 04:38:56 -- spdk/autotest.sh@194 -- # uname -s 00:22:00.127 04:38:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:00.127 04:38:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:00.127 04:38:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:00.127 04:38:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:00.127 04:38:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.127 04:38:56 -- common/autotest_common.sh@10 -- # set +x 00:22:00.127 04:38:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:00.127 04:38:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:00.127 04:38:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:00.127 04:38:56 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:00.127 04:38:56 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:00.127 04:38:56 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:22:00.127 04:38:56 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:00.127 04:38:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.128 04:38:56 -- common/autotest_common.sh@10 -- # set +x 00:22:00.128 04:38:56 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:22:00.128 04:38:56 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:22:00.128 04:38:56 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:22:00.128 04:38:56 -- common/autotest_common.sh@10 -- # set +x 00:22:02.658 INFO: APP EXITING 00:22:02.658 INFO: killing all VMs 00:22:02.658 INFO: killing vhost app 00:22:02.658 INFO: EXIT DONE 00:22:02.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:02.917 Waiting for block devices as requested 00:22:02.917 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:02.917 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:03.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:03.853 Cleaning 00:22:03.853 Removing: /var/run/dpdk/spdk0/config 00:22:03.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:03.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:03.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:03.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:03.853 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:03.853 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:03.853 Removing: /dev/shm/spdk_tgt_trace.pid56981 00:22:03.853 Removing: /var/run/dpdk/spdk0 00:22:03.853 Removing: /var/run/dpdk/spdk_pid56729 00:22:03.853 Removing: /var/run/dpdk/spdk_pid56981 00:22:03.853 Removing: /var/run/dpdk/spdk_pid57210 00:22:03.853 Removing: /var/run/dpdk/spdk_pid57314 00:22:03.853 Removing: /var/run/dpdk/spdk_pid57370 00:22:03.853 Removing: /var/run/dpdk/spdk_pid57509 00:22:03.853 Removing: 
/var/run/dpdk/spdk_pid57527 00:22:03.853 Removing: /var/run/dpdk/spdk_pid57739 00:22:03.853 Removing: /var/run/dpdk/spdk_pid57862 00:22:03.853 Removing: /var/run/dpdk/spdk_pid57975 00:22:03.853 Removing: /var/run/dpdk/spdk_pid58103 00:22:03.853 Removing: /var/run/dpdk/spdk_pid58211 00:22:03.853 Removing: /var/run/dpdk/spdk_pid58256 00:22:03.853 Removing: /var/run/dpdk/spdk_pid58292 00:22:03.853 Removing: /var/run/dpdk/spdk_pid58364 00:22:03.853 Removing: /var/run/dpdk/spdk_pid58491 00:22:03.853 Removing: /var/run/dpdk/spdk_pid58938 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59015 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59089 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59105 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59273 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59289 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59440 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59456 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59526 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59549 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59619 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59648 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59843 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59885 00:22:03.853 Removing: /var/run/dpdk/spdk_pid59974 00:22:03.853 Removing: /var/run/dpdk/spdk_pid61361 00:22:03.853 Removing: /var/run/dpdk/spdk_pid61573 00:22:03.853 Removing: /var/run/dpdk/spdk_pid61713 00:22:03.853 Removing: /var/run/dpdk/spdk_pid62367 00:22:03.853 Removing: /var/run/dpdk/spdk_pid62579 00:22:03.853 Removing: /var/run/dpdk/spdk_pid62730 00:22:03.853 Removing: /var/run/dpdk/spdk_pid63390 00:22:03.853 Removing: /var/run/dpdk/spdk_pid63720 00:22:04.112 Removing: /var/run/dpdk/spdk_pid63860 00:22:04.112 Removing: /var/run/dpdk/spdk_pid65266 00:22:04.112 Removing: /var/run/dpdk/spdk_pid65526 00:22:04.112 Removing: /var/run/dpdk/spdk_pid65672 00:22:04.112 Removing: /var/run/dpdk/spdk_pid67079 00:22:04.112 Removing: /var/run/dpdk/spdk_pid67343 00:22:04.112 Removing: 
/var/run/dpdk/spdk_pid67493 00:22:04.112 Removing: /var/run/dpdk/spdk_pid68902 00:22:04.112 Removing: /var/run/dpdk/spdk_pid69353 00:22:04.112 Removing: /var/run/dpdk/spdk_pid69499 00:22:04.112 Removing: /var/run/dpdk/spdk_pid70998 00:22:04.112 Removing: /var/run/dpdk/spdk_pid71265 00:22:04.112 Removing: /var/run/dpdk/spdk_pid71417 00:22:04.112 Removing: /var/run/dpdk/spdk_pid72913 00:22:04.112 Removing: /var/run/dpdk/spdk_pid73179 00:22:04.112 Removing: /var/run/dpdk/spdk_pid73329 00:22:04.112 Removing: /var/run/dpdk/spdk_pid74828 00:22:04.112 Removing: /var/run/dpdk/spdk_pid75321 00:22:04.112 Removing: /var/run/dpdk/spdk_pid75472 00:22:04.112 Removing: /var/run/dpdk/spdk_pid75627 00:22:04.112 Removing: /var/run/dpdk/spdk_pid76046 00:22:04.112 Removing: /var/run/dpdk/spdk_pid76783 00:22:04.112 Removing: /var/run/dpdk/spdk_pid77183 00:22:04.112 Removing: /var/run/dpdk/spdk_pid77874 00:22:04.112 Removing: /var/run/dpdk/spdk_pid78317 00:22:04.112 Removing: /var/run/dpdk/spdk_pid79076 00:22:04.112 Removing: /var/run/dpdk/spdk_pid79485 00:22:04.112 Removing: /var/run/dpdk/spdk_pid81459 00:22:04.112 Removing: /var/run/dpdk/spdk_pid81912 00:22:04.112 Removing: /var/run/dpdk/spdk_pid82359 00:22:04.112 Removing: /var/run/dpdk/spdk_pid84473 00:22:04.112 Removing: /var/run/dpdk/spdk_pid84965 00:22:04.112 Removing: /var/run/dpdk/spdk_pid85497 00:22:04.112 Removing: /var/run/dpdk/spdk_pid86597 00:22:04.112 Removing: /var/run/dpdk/spdk_pid86928 00:22:04.112 Removing: /var/run/dpdk/spdk_pid87867 00:22:04.112 Removing: /var/run/dpdk/spdk_pid88194 00:22:04.112 Removing: /var/run/dpdk/spdk_pid89145 00:22:04.112 Removing: /var/run/dpdk/spdk_pid89472 00:22:04.112 Removing: /var/run/dpdk/spdk_pid90150 00:22:04.112 Removing: /var/run/dpdk/spdk_pid90436 00:22:04.112 Removing: /var/run/dpdk/spdk_pid90503 00:22:04.112 Removing: /var/run/dpdk/spdk_pid90545 00:22:04.112 Removing: /var/run/dpdk/spdk_pid90806 00:22:04.112 Removing: /var/run/dpdk/spdk_pid90986 00:22:04.112 Removing: 
/var/run/dpdk/spdk_pid91079 00:22:04.112 Removing: /var/run/dpdk/spdk_pid91183 00:22:04.112 Removing: /var/run/dpdk/spdk_pid91242 00:22:04.112 Removing: /var/run/dpdk/spdk_pid91267 00:22:04.112 Clean 00:22:04.112 04:39:00 -- common/autotest_common.sh@1453 -- # return 0 00:22:04.112 04:39:00 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:22:04.112 04:39:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.112 04:39:00 -- common/autotest_common.sh@10 -- # set +x 00:22:04.372 04:39:00 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:22:04.372 04:39:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.372 04:39:00 -- common/autotest_common.sh@10 -- # set +x 00:22:04.372 04:39:00 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:04.372 04:39:00 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:04.372 04:39:00 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:04.372 04:39:00 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:22:04.372 04:39:00 -- spdk/autotest.sh@398 -- # hostname 00:22:04.372 04:39:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:04.630 geninfo: WARNING: invalid characters removed from testname! 
00:22:31.188 04:39:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:31.758 04:39:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:34.377 04:39:30 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:36.285 04:39:32 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:38.192 04:39:34 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:40.731 04:39:36 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:42.637 04:39:38 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:42.637 04:39:38 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:42.637 04:39:38 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:42.637 04:39:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:42.637 04:39:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:42.637 04:39:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:42.637 + [[ -n 5431 ]] 00:22:42.637 + sudo kill 5431 00:22:42.645 [Pipeline] } 00:22:42.723 [Pipeline] // timeout 00:22:42.729 [Pipeline] } 00:22:42.745 [Pipeline] // stage 00:22:42.752 [Pipeline] } 00:22:42.772 [Pipeline] // catchError 00:22:42.807 [Pipeline] stage 00:22:42.809 [Pipeline] { (Stop VM) 00:22:42.823 [Pipeline] sh 00:22:43.100 + vagrant halt 00:22:45.663 ==> default: Halting domain... 00:22:53.799 [Pipeline] sh 00:22:54.081 + vagrant destroy -f 00:22:57.372 ==> default: Removing domain... 
00:22:57.385 [Pipeline] sh 00:22:57.720 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:22:57.728 [Pipeline] } 00:22:57.746 [Pipeline] // stage 00:22:57.753 [Pipeline] } 00:22:57.768 [Pipeline] // dir 00:22:57.774 [Pipeline] } 00:22:57.789 [Pipeline] // wrap 00:22:57.796 [Pipeline] } 00:22:57.811 [Pipeline] // catchError 00:22:57.822 [Pipeline] stage 00:22:57.824 [Pipeline] { (Epilogue) 00:22:57.838 [Pipeline] sh 00:22:58.117 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:04.697 [Pipeline] catchError 00:23:04.699 [Pipeline] { 00:23:04.713 [Pipeline] sh 00:23:04.999 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:04.999 Artifacts sizes are good 00:23:05.008 [Pipeline] } 00:23:05.022 [Pipeline] // catchError 00:23:05.043 [Pipeline] archiveArtifacts 00:23:05.052 Archiving artifacts 00:23:05.158 [Pipeline] cleanWs 00:23:05.170 [WS-CLEANUP] Deleting project workspace... 00:23:05.170 [WS-CLEANUP] Deferred wipeout is used... 00:23:05.176 [WS-CLEANUP] done 00:23:05.178 [Pipeline] } 00:23:05.193 [Pipeline] // stage 00:23:05.197 [Pipeline] } 00:23:05.210 [Pipeline] // node 00:23:05.215 [Pipeline] End of Pipeline 00:23:05.248 Finished: SUCCESS